RStudio lpSolve hanging, how to fix

I am using lpSolve in R to solve an optimization problem, but sometimes the function runs into an issue and hangs. RStudio has the red stop-sign icon that I can click to terminate the program, but for some reason the stop sign does not break this particular hang.
Other than clicking the stop sign, is there any way to stop the console when a function gets stuck? Something automatic (e.g. if the console has been stuck for 10+ seconds, then terminate) would be great.
thanks!

If the problem is that the optimization is too complex, you can use the function lp.control() from the lpSolveAPI package, which accepts a timeout:
lp.control(lprec, timeout = "number of seconds before termination")
where lprec is the model object. lp_solve keeps searching for better solutions, so stopping it at the timeout yields the best answer found within that timeframe.
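For illustration, here is a minimal sketch assuming the lpSolveAPI package; note that lp.control() operates on the model object, not on a problem name. The tiny model itself is made up for the example:

```r
library(lpSolveAPI)

# Tiny illustrative model: maximize x1 + 2*x2 subject to x1 + x2 <= 10.
lprec <- make.lp(0, 2)
lp.control(lprec, sense = "max", timeout = 10)  # give up after 10 seconds
set.objfn(lprec, c(1, 2))
add.constraint(lprec, c(1, 1), "<=", 10)

status <- solve(lprec)  # 0 means optimal; 7 means the timeout was hit
get.objective(lprec)    # best objective value found so far
```

A status code of 7 indicates lp_solve stopped at the timeout; the model still holds the best solution found up to that point.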

Related

Julia: ctrl+c does not interrupt

I'm using the REPL inside VS Code and trying to debug code that gets stuck inside a certain package. I want to figure out which process is taking time by looking at the stack trace, but I cannot interrupt because the REPL does not respond to Ctrl+C. I pressed Ctrl+X by accident and that just showed ^X on the screen.
I am using JuMP and GLPK so it could be stuck there. However, I am not seeing any outputs.
I would also appreciate any tips on figuring out which process is causing it to be stuck.
Interrupts are not implemented in GLPK.jl. I've opened an issue: https://github.com/jump-dev/GLPK.jl/issues/171 (but it's unlikely to get fixed quickly).
If you're interested in contributing to JuMP, it's a good issue to get started with. You could look at the Gurobi.jl code for how we handle interrupts there as inspiration.
I started out using GLPK.jl and also found that it would "hang" on large problems. I recommend trying the Cbc.jl solver instead: it has a time-limit parameter that interrupts the solver after a set number of seconds, and I found it to produce quality results. (Or you could use Cbc for dev/QA testing to determine what might be causing the hang, then switch back to GLPK for your production runs.)
You can set the time limit using the seconds parameter as follows.
For newer package versions:
model = Model(optimizer_with_attributes(Cbc.Optimizer,
    "seconds" => 60,
    "threads" => 4,
    "loglevel" => 0,
    "ratioGap" => 0.0001))
Or like this for older package versions:
model = Model(with_optimizer(Cbc.Optimizer,
    seconds = 60,
    threads = 4,
    loglevel = 0,
    ratioGap = 0.0001))

julia-client: can't render lazy

Could somebody please explain to me what this message might mean?
I have the Julia client running in Atom, and my code works properly and gets me the results, but for some line executions (Ctrl+Enter) the instant eval gives me "julia-client: can't render lazy".
It appears that behind the scenes the code is executed, but the inline evaluation prefers not to output anything.
The lines producing these messages usually return a two-dimensional array or a DataFrame; normally the type and dimensions are printed in the eval, but for these specific lines it can't render.
I could not find similar reports anywhere else.
julia version 0.5.0-rc3
This is a problem with package versions being out of sync. If you're on the Julia release (v0.5), this will be fixed by a Pkg.update(). In the future, this kind of question is better suited for the Juno discussion board.

Debugging options in R

In some coding languages, the cursor stops in debug mode before the error happened in the local environment of the function being run. I am wondering if there is a similar functionality in R.
Currently what I found from researching this matter:
To reproduce that in R, we need to position browser() at a location we think is strategic, re-source the function (select all its lines and hit Ctrl+Enter), run the code again, and then debug inside the function. If browser() was badly positioned due to a wrong guess, the whole operation has to be repeated, which costs significant time.
It is very painful.
Another solution I found, which is even worse, is options(error = recover). If we are going through iterations of a loop, for example, it offers to stop before the loop started instead of jumping into the code at the iteration that caused the bug. This feature does not seem much more helpful.
(This is too long/formatted for a comment.) I'm not sure what you mean, referring to options(error = recover), by
it offers to stop before the loop started instead of jumping into the code at the iteration that caused the bug.
Here's an example where the break seems to occur at the iteration that caused the error, as requested:
options(error=recover)
f <- function(x) { for (i in 1:x) if (i==2) stop() }
f(5)
Error in f(5) :
Enter a frame number, or 0 to exit
1: f(5)
Selection: 1
Called from: top level
Browse[1]> print(i)
[1] 2
This is breaking at a specific step in the loop, not (as suggested above) before the loop starts (where i would be undefined).
Can you please give a reproducible example to clarify the difference between the behaviour that happens and what you'd prefer?
For what it's worth, the RStudio front-end offers a slightly more visual debugging experience that you might prefer.

Ignoring errors in R

I'm running a complex but relatively quick simulation in R (takes about 5-10 minutes per simulation) and I'm beginning to run it in parallel with various input values in order to test the robustness of some of my algorithms.
There seems to be one problem: some arrangements of inputs cause a fatal error within the simulation and the whole code comes crashing down, causing the simulations to end. Is there an easy way to catch the error (which may come from a variety of locations) and have it just ignore those input values and move on to the next?
It's frustrating when I set an array of inputs to check that should take 5-6 hours to run through all the simulations and I come back to find that it crashed in the first 45 minutes.
While I work on trying to fix the bug / identify inputs that push me to that error, any ideas on how to ignore / catch the errors as they come?
Thanks
I don't know how you organized your simulations, but I guess you have a loop where you use new arguments at each step.
You can use tryCatch. Here I throw an error when I have bad input:
step.simul <- function(x) {
  stopifnot(x %% 2 == 1, x > 0)
  (x - 1) / 2
}
Then, using tryCatch, I flag the bad inputs with a code that tells me which input was bad:
sapply(1:5, function(i) tryCatch(step.simul(i), error = function(e) -1000 - i))
[1]     0 -1002     1 -1004     2
As you can see, the simulation runs over the whole loop index without stopping.
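As a variation (a base-R sketch reusing the same step.simul), you can store the error message itself instead of a numeric flag, which makes it easier to see afterwards why each input failed:

```r
# Same simulation step as above; fails on even or non-positive inputs.
step.simul <- function(x) {
  stopifnot(x %% 2 == 1, x > 0)
  (x - 1) / 2
}

# Keep the result on success, the error message on failure.
results <- lapply(1:5, function(i) {
  tryCatch(step.simul(i), error = function(e) conditionMessage(e))
})
# results[[1]] is 0; results[[2]] is the message explaining why input 2 failed.
```

Using lapply rather than sapply lets each element keep its own type (numeric result or character message) without coercing the whole vector to character.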

How does setTimeLimit work in R?

I am trying to master setTimeLimit() in R and my experience has led to several related questions, so maybe the fundamental question is: how does this really work? (I have been looking at evalWithTimeout() from R.utils as well, and it may suit my purposes slightly better, but it's built on this function.)
Here are the key things I am trying to figure out:
How does it monitor the elapsed time? I.e. it seems to get inserted into the flow control, so how does it do that? Being able to have "background" processes is cool, and could be used for reporting status, checkpointing, and more.
Can I determine how much time remains until it is triggered? I realize I can wrap it and store, somewhere, the elapsed & CPU time consumed at about the point of invocation (i.e. the output of proc.time()). But, this function is already storing these somewhere and I'd like to know where, or at least how to determine the time remaining.
Can it be made to do something useful if the R console is idle? Being able to monitor elapsed and CPU time is very useful. I'd like to be able to monitor when R is idle, but from tinkering it seems a command must be submitted or completed for the limit to be checked. Moreover, just signalling an error doesn't trigger a subsequent action. (Maybe I need to give more attention to evalWithTimeout.)
The help information says that it can be applicable with C or Fortran, but doesn't give examples. Any suggestions on how this should be done?
To show that setTimeLimit does not work during a C function call:
rfunction <- function() {
  repeat {
    x <- rnorm(100)
  }
}
cfunction <- function() {
  x <- eigen(matrix(rnorm(1e6), 1e3))
}
setTimeLimit(3)
system.time(try(rfunction(), silent = TRUE))
system.time(try(cfunction(), silent = TRUE))
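As a complement, here is a minimal base-R sketch of the case where setTimeLimit does work: a pure-R loop is interrupted at the next interrupt checkpoint, and transient = TRUE plus an explicit reset on exit (a pattern similar to what R.utils::withTimeout uses) keeps the limit from affecting later commands. The helper name with_time_limit is made up for illustration:

```r
# Run an expression under an elapsed-time limit; clear the limit on exit.
with_time_limit <- function(seconds, expr) {
  setTimeLimit(elapsed = seconds, transient = TRUE)
  on.exit(setTimeLimit(elapsed = Inf, transient = FALSE))
  force(expr)  # evaluate the caller's expression under the limit
}

res <- tryCatch(
  with_time_limit(2, { repeat x <- rnorm(100) }),  # R-level loop: interruptible
  error = function(e) "timed out"
)
res  # "timed out"
```

Because expr is a lazily evaluated promise, the loop only starts running inside with_time_limit, after the limit is installed; the on.exit() call guarantees the limit is cleared whether the expression finishes or errors.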
