I am searching for a way to terminate an apply function early on some condition. Using a for loop, something like:
FDP_HCFA = function(FaultMatrix, TestCosts, GenerateNeighbors, RandomSeed) {
  set.seed(RandomSeed)
  ## number of tests, mind the summary column
  nT = ncol(FaultMatrix) - 1
  StartingSequence = sample(1:nT)
  BestAPFD = APFD_C(StartingSequence, FaultMatrix, TestCosts)
  BestPrioritization = StartingSequence
  MakingProgress = TRUE
  NumberOfIterations = 0
  while(MakingProgress) {
    BestPrioritizationBefore = BestPrioritization
    AllCurrentNeighbors = GenerateNeighbors(BestPrioritization)
    for(CurrentNeighbor in AllCurrentNeighbors) {
      CurrentAPFD = APFD_C(CurrentNeighbor, FaultMatrix, TestCosts)
      if(CurrentAPFD > BestAPFD) {
        BestAPFD = CurrentAPFD
        BestPrioritization = CurrentNeighbor
        break
      }
    }
    if(length(union(list(BestPrioritizationBefore),
                    list(BestPrioritization))) == 1)
      MakingProgress = FALSE
    NumberOfIterations = NumberOfIterations + 1
  }
}
I would like to rewrite this function using some member of the apply family. In particular, I want to stop the evaluation at the first individual with increased fitness, thereby avoiding the cost of considering the rest of the population.
I reckon that you don't really grasp the apply family and its purpose. Contrary to the general idea, they're not the equivalent of any for-loop. One can say that most for-loops are the equivalent of an apply, but that's another matter.
Apply does exactly what it says: it applies a function to a number of similar arguments sequentially and returns the results. Hence, by definition you cannot break out of an apply. You're not operating in the global environment any more, so in principle you cannot keep global counters, check some condition after each execution, and adapt the loop. You can access the global environment and even change variables using assign or <<-, but this is pretty dangerous.
To understand the difference, don't read sapply(1:3, afunc) as for(i in 1:3) afunc(i), but as
afunc(1)
afunc(2)
afunc(3)
in one (block) statement. That reflects better what you're doing exactly. An equivalent for break in an apply simply doesn't make sense, as it is more a block of code than a loop.
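That said, if you want something apply-flavoured that can stop early, base R's Position() applies a predicate to the elements of a list in turn and returns the index of the first TRUE, without evaluating the rest. A minimal sketch of your inner loop along those lines (assuming AllCurrentNeighbors is a list and APFD_C, FaultMatrix, TestCosts are as in your code):
idx <- Position(function(neighbor) APFD_C(neighbor, FaultMatrix, TestCosts) > BestAPFD,
                AllCurrentNeighbors)
if (!is.na(idx)) {
  # one extra evaluation of the winning neighbor, kept for clarity
  BestPrioritization <- AllCurrentNeighbors[[idx]]
  BestAPFD <- APFD_C(BestPrioritization, FaultMatrix, TestCosts)
}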
Aside from getting your sample code to work, I think this is a clear case where a loop is the right choice. Although R can apply a function to a whole vector of variables [EDIT: but you have to decide what they are before applying], in this case I'd use a while loop to avoid the cost of running unnecessary repetitions. Caveat: I know for loops have compared favorably with apply in timing tests, but I have not seen a similar test for while. Check out some of the options at http://cran.r-project.org/doc/manuals/R-lang.html#Control-structures.
while ( statement1 ) statement2
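For instance, a minimal illustration of that form (a toy example of mine, not from the question):
i <- 1
while (i <= 3) {              # statement1: the loop condition
  cat("iteration", i, "\n")   # statement2: the body
  i <- i + 1
}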
I have a function that I am optimizing using the optimx function in R (I'm also open to using optim, since I'm not sure it will make a difference for what I'm trying to do). I have a gradient that I am passing to optimx for (hopefully) faster convergence compared to not using a gradient. Both the function and the gradient use many of the same quantities that are computed from each new parameter set. One of these quantities in particular is very computationally costly, and it's redundant to have to compute this quantity twice for each iteration - once for the function, and again for the gradient. I'm trying to find a way to compute this quantity once, then pass it to the function and the gradient.
So here is what I am doing. So far this works, but it is inefficient:
optfunc <- function(paramvec){
  quant1 <- costlyfunction(paramvec)
  # costlyfunction is a separate function that takes a while to run
  loglikelihood <- sum(quant1)**2
  # not really squared, but the log likelihood uses quant1 in its calculation
  return(loglikelihood)
}
optgr <- function(paramvec){
  quant1 <- costlyfunction(paramvec)
  mygrad <- sum(quant1)  # again not the real formula, just for illustration
  return(mygrad)
}
optimx(par=paramvec, fn=optfunc, gr=optgr, method="BFGS")
I am trying to find a way to calculate quant1 only once with each iteration of optimx. It seems the first step would be to combine fn and gr into a single function. I thought the answer to this question may help me, and so I recoded the optimization as:
optfngr <- function(){
  quant1 <- costlyfunction(paramvec)
  optfunc <- function(paramvec){
    loglikelihood <- sum(quant1)**2
    return(loglikelihood)
  }
  optgr <- function(paramvec){
    mygrad <- sum(quant1)
    return(mygrad)
  }
  return(list(fn = optfunc, gr = optgr))
}
do.call(optimx, c(list(par=paramvec, method="BFGS", optfngr() )))
Here, I receive the error: "Error in optimx.check(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower, : Cannot evaluate function at initial parameters." Of course, there are obvious problems with my code here. So, I'm thinking answering any or all of the following questions may shed some light:
I passed paramvec as the only argument to optfunc and optgr so that optimx knows that paramvec is what needs to be iterated over. However, I don't know how to pass quant1 to optfunc and optgr. Is it true that if I try to pass quant1, then optimx will not properly identify the parameter vector?
I wrapped optfunc and optgr into one function, so that the quantity quant1 will exist in the same function space as both functions. Perhaps I can avoid this if I can find a way to return quant1 from optfunc, and then pass it to optgr. Is this possible? I'm thinking it's not, since the documentation for optimx is pretty clear that the function needs to return a scalar.
I'm aware that I might be able to use the dots arguments to optimx as extra parameter arguments, but I understand that these are for fixed parameters, and not arguments that will change with each iteration. Unless there is also a way to manipulate this?
Thanks in advance!
Your approach is close to what you want, but not quite right. You want to call costlyfunction(paramvec) from within optfunc(paramvec) or optgr(paramvec), but only when paramvec has changed. Then you want to save its value in the enclosing frame, as well as the value of paramvec that was used to compute it. That is, something like this:
optfngr <- function(){
  quant1 <- NULL
  prevparam <- NULL
  updatecostly <- function(paramvec) {
    if (!identical(paramvec, prevparam)) {
      quant1 <<- costlyfunction(paramvec)
      prevparam <<- paramvec
    }
  }
  optfunc <- function(paramvec){
    updatecostly(paramvec)
    loglikelihood <- sum(quant1)**2
    return(loglikelihood)
  }
  optgr <- function(paramvec){
    updatecostly(paramvec)
    mygrad <- sum(quant1)
    return(mygrad)
  }
  return(list(fn = optfunc, gr = optgr))
}
do.call(optimx, c(list(par=paramvec, method="BFGS"), optfngr() ))
I used <<- to make assignments to the enclosing frame, and fixed up your do.call second argument.
Doing this is called "memoization" (or "memoisation" in some locales; see http://en.wikipedia.org/wiki/Memoization), and there's a package called memoise that does it. It keeps track of many (or all?) of the previous results of calls to costlyfunction, so it would be especially good if paramvec only takes on a small number of values. But I think it won't be so good in your situation, because you'll likely only make a small number of repeated calls to costlyfunction and then never use the same paramvec again.
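If you do want to try that route, a minimal sketch using the memoise package (assuming costlyfunction takes paramvec as its only argument) might look like this:
library(memoise)

# memoise() wraps costlyfunction so that a repeated call with an identical
# argument returns the cached result instead of recomputing it.
costly_m <- memoise(costlyfunction)

optfunc <- function(paramvec) sum(costly_m(paramvec))**2
optgr   <- function(paramvec) sum(costly_m(paramvec))

optimx(par = paramvec, fn = optfunc, gr = optgr, method = "BFGS")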
I'm working on a script that finds text in large PDFs, and I have the bare bones script written out. I'm trying to refactor my code to encapsulate the main while loop in a function, so I can run sapply() on it with a list of the PDFs. Some of the functions that I call within the main loop require values from that main loop: here's a stripped down, pseudo-version of my code:
pdfParse <- function() {
  N <- sample(1:50, 1)*2
  n = N/2; i = 0
  while (i <= N) {
    what <- whatP(n)
    i = i + length(what)
    if (!length(what)) {break}
    else {n <- N/2 - i}
  }
  n
}

res <- sample(0:1, N)
r = 1
whatP <- function(t) {
  r = r*2
  if (t %% 3) {
    if (t %% 5) {
      return(res[(n/r):n])
    } else {
      whatP((rev(t)[1]):(rev(t)[1] + r))
    }
  } else {
    return(rep(NaN, 2))
  }
}
So my question is, how do I access the variable n that I've defined in the pdfParse function within the function it calls? Even if it's possible, I'd like to avoid assigning it as a global variable. I've read a bit into closures, but I'm not sure if that's an applicable solution here.
Edit: For clarification, whatP(n) starts out with n as its initial argument, but it's recursive, so depending on whether certain conditions are fulfilled, it may end up operating on a vector that doesn't even include n. But I still want to return something that depends on the original n I defined in pdfParse.
The simplest (and probably safest, given that your whatP function is recursive) approach is to make n an argument of whatP.
whatP <- function(t,n) {
...
}
and then call it from pdfParse with two arguments instead of one.
If for some reason you don't want to do this, then you have two options
(a) you can rely on R's scoping rules, which are very different from, say, C(++). R uses lexical scoping: when it looks up a variable, it searches, in order,
the environment of the current function
the environment in which that function was defined (its enclosing environment)
the enclosing environment of that environment, and so on
the global environment
the environments of loaded packages, in the same order they appear in search().
Note that this chain follows where a function was defined, not where it was called from. So for this option to work, whatP has to be defined inside pdfParse (making it a closure over n); it will then find the appropriate value under the second bullet, however deep the recursion goes. Defined at top level, as in your example, it would only ever see a global n.
(b) you can use get with a suitable envir argument (for example envir = parent.frame(), or a frame obtained via sys.frame), which looks n up in the calling frame. Not recommended here, as it's tricky to get right with recursive functions, but it can be useful in other situations (and it will bypass any n you might have redefined in the meantime in another, closer, scope).
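For completeness, a minimal sketch of the closure variant of option (a), with whatP defined inside pdfParse so that n is found by lexical scoping (the bodies are placeholders, not your real logic):
pdfParse <- function() {
  N <- sample(1:50, 1) * 2
  n <- N / 2

  # Defined here, whatP's enclosing environment is pdfParse's frame,
  # so it can read n directly (and could even modify it with <<-).
  whatP <- function(t) {
    if (t %% 3) t + n else rep(NaN, 2)   # placeholder body
  }

  whatP(n)
}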
I've been writing (unsophisticated) code for a decent while, and I feel like I have a somewhat firm grasp on while and for loops and if/else statements. I should also say that I feel like I understand (at my level, at least) the concept of recursion. That is, I understand how a method keeps calling itself until the parameters of an iteration match a base case in the method, at which point the calls begin to terminate and pass control (along with values) back to previous instances, and eventually an overall value for the first call is determined. I may not have explained it very well, but I think I understand it, and I can follow/make traces of the structured examples I've seen. But my question is about creating recursive methods in the wild, i.e., in unstructured circumstances.
Our professor wants us to write recursively at every opportunity, and has made the (technically inaccurate?) statement that all loops can be replaced with recursion. But, since many times recursive operations are contained within while or for loops, this means, to state the obvious, not every loop can be replaced with recursion. So...
For unstructured/non-classroom situations,
1) how can I recognize that a loop situation can/cannot be turned into a recursion, and
2) what is the overall idea/strategy to use when applying recursion to a situation? I mean, how should I approach the problem? What aspects of the problem will be used as recursive criteria, etc?
Thanks!
Edit 6/29:
While I appreciate the 2 answers, I think maybe the preamble to my question was too long because it seems to be getting all of the attention. What I'm really asking is for someone to share with me, a person who "thinks" in loops, an approach for implementing recursive solutions. (For purposes of the question, please assume I have a sufficient understanding of the solution, but just need to create recursive code.) In other words, to apply a recursive solution, what am I looking for in the problem/solution that I will then use for the recursion? Maybe some very general statements about applying recursion would be helpful too. (note: please, not definitions of recursion, since I think I pretty much understand the definition. It's just the process of applying them I am asking about.) Thanks!
Every loop CAN be turned into recursion fairly easily. (It's also true that every recursion can be turned into loops, but not always easily.)
But, I realize that saying "fairly easily" isn't actually very helpful if you don't see how, so here's the idea:
For this explanation, I'm going to assume a plain vanilla while loop--no nested loops or for loops, no breaking out of the middle of the loop, no returning from the middle of the loop, etc. Those other things can also be handled but would muddy up the explanation.
The plain vanilla while loop might look like this:
1. x = initial value;
2. while (some condition on x) {
3.   do something with x;
4.   x = next value;
5. }
6. final action;
Then the recursive version would be
A. def Recursive(x) {
B.   if (some condition on x) {
C.     do something with x;
D.     Recursive(next value);
E.   }
F.   else { # base case = where the recursion stops
G.     final action;
H.   }
I. }
J. Recursive(initial value);
So,
the initial value of x in line 1 became the original argument to Recursive on line J
the condition of the loop on line 2 became the condition of the if on line B
the first action inside the loop on line 3 became the first action inside the if on line C
the next value of x on line 4 became the next argument to Recursive on line D
the final action on line 6 became the action in the base case on line G
If more than one variable was being updated in the loop, then you would often have a corresponding number of arguments in the recursive function.
Again, this basic recipe can be modified to handle fancier situations than plain vanilla while loops.
Minor comment: In the recursive function, it would be more common to put the base case on the "then" side of the if instead of the "else" side. In that case, you would flip the condition of the if to its opposite. That is, the condition in the while loop tests when to keep going, whereas the condition in the recursive function tests when to stop.
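To make the recipe concrete, here is a small toy example in R (my own, not from the question): summing 1..n, first as a while loop and then as its recursive counterpart, with both loop variables (i and total) becoming arguments as described above.
# While-loop version
sum_to_n <- function(n) {
  total <- 0
  i <- 1                          # line 1: initial value
  while (i <= n) {                # line 2: loop condition
    total <- total + i            # line 3: do something with i
    i <- i + 1                    # line 4: next value
  }
  total                           # line 6: final action
}

# Recursive version, carrying both loop variables as arguments
sum_to_n_rec <- function(i, total, n) {
  if (i <= n) {                   # same condition as the while loop
    sum_to_n_rec(i + 1, total + i, n)
  } else {
    total                         # base case: final action
  }
}

sum_to_n(5)             # 15
sum_to_n_rec(1, 0, 5)   # 15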
I may not have explained it very well, but I think I understand it, and I can follow/make traces of the structured examples I've seen
That's cool; if I understood your explanation well, then your picture of how recursion works is correct at first glance.
Our professor wants us to write recursively at every opportunity, and has made the (technically inaccurate?) statement that all loops can be replaced with recursion
That's not inaccurate. That's the truth. And the converse is also possible: every recursive function can be rewritten using iteration. It may be hard and unintuitive (like traversing a tree), but it's possible.
how can I recognize that a loop can/cannot be turned into a recursion
Simple: it always can, as said above, so there is nothing to recognize.
what is the overall idea/strategy to use when doing the conversion?
There's no such thing, unfortunately. By that I mean there's no universal or general "work-it-all-out" method; you have to think through each particular problem on its own. One thing may help, however: when converting from an iterative algorithm to a recursive one, think about patterns. How long is the part that keeps repeating itself with only a small difference each time, and where exactly does it sit?
Also, if you ever want to convert a recursive algorithm to an iterative one, keep in mind that the overwhelmingly popular approach for implementing recursion at the hardware level is a (call) stack. Except for trivially convertible algorithms, such as the beloved factorial or Fibonacci functions, you can always think about how the recursion would look in assembler and create an explicit stack yourself. Dirty, but it works.
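As a small, hedged illustration of that explicit-stack idea (an R toy example of mine, not from the question): counting the numeric leaves of a nested list, which is naturally recursive, done iteratively by managing the stack by hand.
# Count the numeric leaves of a nested list without recursion,
# by keeping our own stack of nodes still to visit.
count_leaves <- function(x) {
  stack <- list(x)
  count <- 0
  while (length(stack) > 0) {
    node <- stack[[length(stack)]]          # pop the top node
    stack[[length(stack)]] <- NULL
    if (is.list(node)) {
      stack <- c(stack, node)               # push its children
    } else {
      count <- count + length(node)
    }
  }
  count
}

count_leaves(list(1, list(2:3, list(4))))   # 4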
Going the other direction (loops to recursion), this pair of nested loops:
for(int i = 0; i < 50; i++)
{
    for(int j = 0; j < 60; j++)
    {
    }
}
Is equal to:
void rec1(int i)
{
    if(i >= 50)
        return;
    rec2(0);
    rec1(i + 1);
}
void rec2(int j)
{
    if(j >= 60)
        return;
    rec2(j + 1);
}
Note that the base-case tests are inverted compared to the loop conditions: the recursion returns once the counter reaches the limit.
Every loop can be recursive. Trust your professor, he is right!
There's a conditional debugging flag I miss from Matlab: dbstop if infnan described here. If set, this condition will stop code execution when an Inf or NaN is encountered (IIRC, Matlab doesn't have NAs).
How might I achieve this in R in a more efficient manner than testing all objects after every assignment operation?
At the moment, the only ways I see to do this are via hacks like the following:
Manually insert a test after all places where these values might be encountered (e.g. a division, where division by 0 may occur). The testing would be to use is.finite(), described in this Q & A, on every element.
Use body() to modify the code to call a separate function, after each operation or possibly just each assignment, which tests all of the objects (and possibly all objects in all environments).
Modify R's source code (?!?)
Attempt to use tracemem to identify those variables that have changed, and check only these for bad values.
(New - see note 2) Use some kind of call handlers / callbacks to invoke a test function.
The 1st option is what I am doing at present. This is tedious, because I can't guarantee I've checked everything. The 2nd option will test everything, even if an object hasn't been updated. That is a massive waste of time. The 3rd option would involve modifying assignments of NA, NaN, and infinite values (+/- Inf), so that an error is produced. That seems like it's better left to R Core. The 4th option is like the 2nd - I'd need a call to a separate function listing all of the memory locations, just to ID those that have changed, and then check the values; I'm not even sure this will work for all objects, as a program may do an in-place modification, which seems like it would not invoke the duplicate function.
Is there a better approach that I'm missing? Maybe some clever tool by Mark Bravington, Luke Tierney, or something relatively basic - something akin to an options() parameter or a flag when compiling R?
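For reference, the kind of per-step check meant in the 1st option above is nothing fancier than something like this (x, y, z are illustrative names):
z <- x / y                                   # a step that may produce NaN, Inf, or NA
if (!all(is.finite(z))) stop("non-finite values produced at this step")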
Example code: Here is some very simple example code to test with, incorporating the addTaskCallback function proposed by Josh O'Brien. The code isn't interrupted, but an error does occur in the first scenario, while no error occurs in the second case (i.e. badDiv(0,0,FALSE) doesn't abort). I'm still investigating callbacks, as this looks promising.
badDiv <- function(x, y, flag){
  z = x / y
  if(flag == TRUE){
    return(z)
  } else {
    return(FALSE)
  }
}
addTaskCallback(stopOnNaNs)
badDiv(0, 0, TRUE)
addTaskCallback(stopOnNaNs)
badDiv(0, 0, FALSE)
Note 1. I'd be satisfied with a solution for standard R operations, though a lot of my calculations involve objects used via data.table or bigmemory (i.e. disk-based memory mapped matrices). These appear to have somewhat different memory behaviors than standard matrix and data.frame operations.
Note 2. The callbacks idea seems a bit more promising, as this doesn't require me to write functions that mutate R code, e.g. via the body() idea.
Note 3. I don't know whether or not there is some simple way to test the presence of non-finite values, e.g. meta information about objects that indexes where NAs, Infs, etc. are stored in the object, or if these are stored in place. So far, I've tried Simon Urbanek's inspect package, and have not found a way to divine the presence of non-numeric values.
Follow-up: Simon Urbanek has pointed out in a comment that such information is not available as meta information for objects.
Note 4. I'm still testing the ideas presented. Also, as suggested by Simon, testing for the presence of non-finite values should be fastest in C/C++; that should surpass even compiled R code, but I'm open to anything. For large datasets, e.g. on the order of 10-50GB, this should be a substantial savings over copying the data. One may get further improvements via use of multiple cores, but that's a bit more advanced.
The idea sketched below (and its implementation) is very imperfect. I'm hesitant to even suggest it, but: (a) I think it's kind of interesting, even in all of its ugliness; and (b) I can think of situations where it would be useful. Given that it sounds like you are right now manually inserting a check after each computation, I'm hopeful that your situation is one of those.
Mine is a two-step hack. First, I define a function nanDetector() which is designed to detect NaNs in several of the object types that might be returned by your calculations. Then, addTaskCallback() is used to call the function nanDetector() on .Last.value after each top-level task/calculation is completed. When it finds an NaN in one of those returned values, it throws an error, which you can use to avoid any further computations.
Among its shortcomings:
If you do something like setting options(error = recover), it's hard to tell where the error was triggered, since the error is always thrown from inside of stopOnNaNs().
When it throws an error, stopOnNaNs() is terminated before it can return TRUE. As a consequence, it is removed from the task list, and you'll need to reset it with addTaskCallback(stopOnNaNs) if you want to use it again. (See the 'Arguments' section of ?addTaskCallback for more details).
Without further ado, here it is:
# Sketch of a function that tests for NaNs in several types of objects
nanDetector <- function(X) {
  # To examine data frames
  if(is.data.frame(X)) {
    return(any(unlist(sapply(X, is.nan))))
  }
  # To examine vectors, matrices, or arrays
  if(is.numeric(X)) {
    return(any(is.nan(X)))
  }
  # To examine lists, including nested lists
  if(is.list(X)) {
    return(any(rapply(X, is.nan)))
  }
  return(FALSE)
}

# Set up the taskCallback
stopOnNaNs <- function(...) {
  if(nanDetector(.Last.value)) {stop("NaNs detected!\n")}
  return(TRUE)
}
addTaskCallback(stopOnNaNs)
# Try it out
j <- 1:00
y <- rnorm(99)
l <- list(a=1:4, b=list(j=1:4, k=NaN))
# Error in function (...) : NaNs detected!
# Subsequent time consuming code that could be avoided if the
# error thrown above is used to stop its evaluation.
I fear there is no such shortcut. In theory on unix there is SIGFPE that you could trap on, but in practice
there is no standard way to enable FP operations to trap it (even C99 doesn't include a provision for that) - it is highly system-specific (e.g. feenableexcept on Linux, fp_enable_all on AIX etc.) or requires the use of assembler for your target CPU,
FP operations are nowadays often done in vector units like SSE, so you can't even be sure that the FPU is involved, and
R intercepts some operations on things like NaNs and NAs and handles them separately, so they won't make it to the FP code.
That said, you could hack yourself an R that will catch some exceptions for your platform and CPU if you tried hard enough (disable SSE etc.). It is not something we would consider building into R, but for a special purpose it may be doable.
However, it would still not catch NaN/NA operations unless you change R internal code. In addition, you would have to check every single package you are using since they may be using FP operations in their C code and may also handle NA/NaN separately.
If you are only worried about things like division by zero or over/underflows, the above will work and is probably the closest to something like a solution.
Just checking your results may not be very reliable, because you don't know whether a result is based on some intermediate NaN calculation that fed into an aggregated value which need not itself be NaN. If you are willing to discard such cases, then you could simply walk recursively through your result objects or the workspace. That should not be extremely inefficient, because you only need to worry about REALSXP and not anything else (unless you don't like NAs either - then you'd have a bit more work).
This is an example code that could be used to traverse R object recursively:
#include <Rinternals.h>

static int do_isFinite(SEXP x) {
    /* recurse into generic vectors (lists) */
    if (TYPEOF(x) == VECSXP) {
        int n = LENGTH(x);
        for (int i = 0; i < n; i++)
            if (!do_isFinite(VECTOR_ELT(x, i))) return 0;
    }
    /* recurse into pairlists */
    if (TYPEOF(x) == LISTSXP) {
        while (x != R_NilValue) {
            if (!do_isFinite(CAR(x))) return 0;
            x = CDR(x);
        }
        return 1;
    }
    /* I wouldn't bother with attributes except for S4
       where attributes are slots */
    if (IS_S4_OBJECT(x) && !do_isFinite(ATTRIB(x))) return 0;
    /* check reals */
    if (TYPEOF(x) == REALSXP) {
        int n = LENGTH(x);
        double *d = REAL(x);
        for (int i = 0; i < n; i++) if (!R_finite(d[i])) return 0;
    }
    return 1;
}

SEXP isFinite(SEXP x) { return ScalarLogical(do_isFinite(x)); }

/* in R: .Call("isFinite", x) */
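To try it out, one way (assuming the C code is saved as isFinite.c) is to build a shared library with R CMD SHLIB and load it with dyn.load:
# In a shell: R CMD SHLIB isFinite.c   (produces isFinite.so, or a .dll on Windows)
dyn.load("isFinite.so")
.Call("isFinite", list(a = 1:4, b = c(2, NaN)))   # FALSE
.Call("isFinite", list(a = 1:4, b = 2.5))         # TRUE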
So I have this function that I'm trying to convert from a recursive algorithm to an iterative one. I'm not even sure if I have the right subproblems, but this seems to determine what I need correctly. However, recursion can't be used; I need to use dynamic programming, so I have to change it to iterative bottom-up or top-down dynamic programming.
The basic recursive function looks like this:
Recursion(i, j) {
    if(i > j) {
        return 0;
    }
    else {
        // This finds the maximum value for all possible
        // subproblems and returns that for this problem
        for(int x = i; x < j; x++) {
            if(some subsection i to x plus recursion(x+1, j) is > current max) {
                max = some subsection i to x plus recursion(x+1, j)
            }
        }
    }
}
This is the general idea, but since recursions typically don't have for loops in them I'm not sure exactly how I would convert this to iterative. Does anyone have any ideas?
You have a recursive function that can be summarised as this:
recursive(i, j):
if stopping condition:
return value
loop:
if test current value involving recursive call passes:
set value based on recursive call
return value # this appears to be missing from your example
(I am going to be pretty loose with the pseudo code here, to emphasize the structure of the code rather than the specific implementation)
And you want to flatten it to a purely iterative approach. First it would be good to describe exactly what this involves in the general case, as you seem to be interested in that. Then we can move on to flattening the pseudo code above.
Now flattening a primitive recursive function is quite straightforward. When you are given code that is like:
simple(i):
if i has reached the limit: # stopping condition
return value
# body of method here
return simple(i + 1) # recursive call
You can quickly see that the recursive calls will continue until i reaches the predefined limit. When this happens the value will be returned. The iterative form of this is:
simple_iterative(start):
for (i = start; i < limit; i++):
# body here
return value
This works because the recursive calls form the following call tree:
simple(1)
-> simple(2)
-> simple(3)
...
-> simple(N):
return value
I would describe that call tree as a piece of string. It has a beginning, a middle, and an end. The different calls occur at different points on the string.
A string of calls like that is very like a for loop - all of the work done by the function is passed to the next invocation and the final result of the recursion is just passed back. The for loop version just takes the values that would be passed into the different calls and runs the body code on them.
Simple so far!
Now your method is more complex in two ways:
There are multiple separate statements that make recursive calls
Those statements themselves are within a for loop
So your call tree is something like:
recursive(i, j):
for (v in 1, 2, ... N):
-> first_recursive_call(i + v, j):
-> ... inner calls ...
-> potential second recursive call(i + v, j):
-> ... inner calls ...
As you can see this is not at all like a string. Instead it really is like a tree (or a bush) in that each call results in two more calls. At this point it is actually very hard to turn this back into an entirely iterative function.
This is because of the fundamental relationship between loops and recursion. Any loop can be restated as a recursive call. However, not every recursive function can be transformed into a loop (at least not without maintaining an explicit stack).
The class of recursive calls that can be transformed into loops are called primitive recursion. Your function initially appears to have transcended that. If this is the case then you will not be able to transform it into a purely iterative function (short of actually implementing a call stack and similar within your function).
This video explains the difference between primitive recursion and the fundamentally recursive functions that go beyond it:
https://www.youtube.com/watch?v=i7sm9dzFtEI
I would add that your condition and the value that you assign to max appear to be the same. If this is the case then you can remove one of the recursive calls, allowing your function to become an instance of primitive recursion wrapped in a loop. If you did so then you might be able to flatten it.
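For what it's worth, here is a hedged bottom-up sketch in R of what that flattening could look like, assuming the caching just described; subsection_value() is a made-up stand-in for "some subsection i to x" from the question, and the table best[i, j] plays the role of recursion(i, j), filled in order of increasing interval length so that every recursion(x + 1, j) it needs has already been computed:
# Hypothetical stand-in for "some subsection i to x" in the question.
subsection_value <- function(i, x) x - i + 1

bottom_up <- function(n) {
  # best[i, j] plays the role of recursion(i, j); entries that are never
  # filled stay 0, matching the base case / empty inner loop.
  best <- matrix(0, nrow = n, ncol = n)
  if (n >= 2) {
    for (len in 2:n) {                        # interval length j - i + 1
      for (i in 1:(n - len + 1)) {
        j <- i + len - 1
        cur_max <- 0
        for (x in i:(j - 1)) {                # mirrors for(x = i; x < j; x++)
          cand <- subsection_value(i, x) + best[x + 1, j]  # recursion(x+1, j), already computed
          if (cand > cur_max) cur_max <- cand
        }
        best[i, j] <- cur_max
      }
    }
  }
  best[1, n]
}

bottom_up(4)   # 3 with the toy subsection_value above
Filling the whole table is more work than strictly necessary here, but it keeps the correspondence with recursion(i, j) obvious.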
Well, unless there is an issue with logic not shown yet, it should be fine.
for and while loops are OK inside a recursion.
Just make sure you return in every case that may occur.