Julia: Ctrl+C does not interrupt

I'm using the REPL inside VS Code and trying to debug code that gets stuck inside a certain package. I want to figure out which call is taking the time by looking at the stack trace, but I cannot interrupt it because the REPL does not respond to Ctrl+C. I pressed Ctrl+X by accident and that showed ^X on the screen.
I am using JuMP and GLPK, so it could be stuck there; however, I am not seeing any output.
I would also appreciate any tips on figuring out which call is causing it to get stuck.

Interrupts are not implemented in GLPK.jl. I've opened an issue: https://github.com/jump-dev/GLPK.jl/issues/171 (but it's unlikely to get fixed quickly).
If you're interested in contributing to JuMP, it's a good issue to get started with. You could look at the Gurobi.jl code for inspiration on how we handle interrupts there.
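For context, here is the rough shape of the pattern as a minimal sketch in plain Julia (this is not the actual Gurobi.jl implementation, and step! and the status values are invented for illustration): catch the InterruptException that Ctrl+C raises and turn it into a clean early stop instead of a crash.
# Sketch only: a hypothetical solve loop that stops cleanly on Ctrl+C.
function run_solve_loop(step!, max_iters)
    status = :finished
    for _ in 1:max_iters
        try
            step!()                   # one iteration of the hypothetical solver
        catch err
            err isa InterruptException || rethrow()
            status = :interrupted     # report a clean early stop instead of crashing
            break
        end
    end
    return status
end
A real wrapper around a C solver has to do more work, since the interrupt must be forwarded to the C library at a safe point, which is presumably what the Gurobi.jl code mentioned above deals with.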

I started out using GLPK.jl and also found that it would "hang" on large problems. I recommend trying the Cbc.jl solver instead: it has a time-limit parameter that interrupts the solver after a set number of seconds, and I found it to produce good-quality results. (Or you could use Cbc for dev/QA testing to determine what might be causing the hang, and switch back to GLPK for your production runs.)
You can set the time limit using the seconds parameter as follows.
For newer package versions:
using JuMP, Cbc

model = Model(optimizer_with_attributes(Cbc.Optimizer,
    "seconds" => 60,
    "threads" => 4,
    "loglevel" => 0,
    "ratioGap" => 0.0001))
Or like this for older package versions:
model = Model(with_optimizer(Cbc.Optimizer,
    seconds = 60,
    threads = 4,
    loglevel = 0,
    ratioGap = 0.0001))
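On recent JuMP versions there is also a solver-independent way to set the limit; this is a small sketch assuming the same JuMP + Cbc setup as above (how the limit maps onto a particular solver option is up to each solver):
using JuMP, Cbc
model = Model(Cbc.Optimizer)
set_time_limit_sec(model, 60.0)   # solver-agnostic time limit, in seconds
set_silent(model)                 # solver-agnostic way to turn off solver output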

Related

Julia Differential Equations suppress warning of detected instabilities

I have a program that simulates the paths of particles using the Differential Equations package of Julia. The simulation allows for particles to hit devices - to prevent the continued simulation of such particles, I use the unstable_check of the solver (specifically of the EulerHeun solver). However, this leads to warnings like the following:
┌ Warning: Instability detected. Aborting
└ @ SciMLBase <path>\.julia\packages\SciMLBase\0s9uL\src\integrator_interface.jl:351
As I simulate thousands of particles, this can be quite annoying (and slow).
Can I suppress this warning? And if not, is there another (better) way to abort the simulation of some particles?
I don't think a code sample makes sense / is necessary here; let me know though if you think otherwise.
https://diffeq.sciml.ai/stable/basics/common_solver_opts/#Miscellaneous
verbose: Toggles whether warnings are thrown when the solver exits early. Defaults to true.
Thus to turn off the warnings, you simply do solve(prob,alg;verbose=false).
The simulation allows for particles to hit devices - to prevent the continued simulation of such particles, I use the unstable_check of the solver
Using a DiscreteCallback or ContinuousCallback with affect!(integrator) = terminate!(integrator) is a much better way to do this.
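A minimal sketch of that approach (the drift, noise, and the "device at x = 1.0" stopping condition are invented for illustration; adapt them to your own geometry):
using DifferentialEquations
f(u, p, t) = 1.01 .* u                      # toy drift
g(u, p, t) = 0.1 .* u                       # toy noise (EulerHeun is an SDE solver)
prob = SDEProblem(f, g, [0.1], (0.0, 10.0))
hit_device(u, t, integrator) = u[1] - 1.0   # root when the particle reaches the device
stop!(integrator) = terminate!(integrator)
cb = ContinuousCallback(hit_device, stop!)
sol = solve(prob, EulerHeun(), dt = 0.01, callback = cb, verbose = false)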
There is Suppressor.jl, although I don't know whether this reduces the overhead you get from the warnings being created, so a DiffEq-specific setting might be the better way to go here (I don't know much about DiffEq, though, sorry!).
Here's an example from the readme:
julia> using Suppressor

julia> @suppress begin
           println("This string doesn't get printed!")
           @warn("This warning is ignored.")
       end
For suppressing only warnings (i.e. stderr output) you want @suppress_err.
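For example, to silence only the warnings around a solve call (reusing prob and cb from the sketch above; whether this also avoids the cost of constructing the warnings in the first place is a separate question):
using Suppressor
@suppress_err solve(prob, EulerHeun(), dt = 0.01, callback = cb)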

nvprof R gputools code never ends

I am trying to run "nvprof" from command line on R. Here is how I am doing it:
./nvprof --print-gpu-trace --devices 0 --analysis-metrics --export-profile /home/xxxxx/%p R
This gives me an R prompt and I write R code. I can do it with Rscript too.
The problem I see is that when I give the --analysis-metrics option, it prints lots of lines similar to
==44041== Replaying kernel "void ger_kernel(cublasGerParams)"
And the R process never ends. I am not sure what I am missing.
nvprof doesn't modify process exit behavior, so I think you're just suffering from slowness because your app invokes a lot of kernels. You have two options to speed this up.
1. Selectively profiling metrics
The --analysis-metrics option enables collection of a number of metrics, which requires kernels to be replayed - collecting a different set of metrics for each kernel run.
If your application has a lot of kernel invocations, this can take time. I'd suggest you query the available metrics with the nvprof --query-metrics command, and then manually choose the metrics you are interested in.
Once you know which metrics you want, you can query them using nvprof -m metric_1,metric_2,.... This way, the application will profile fewer metrics, hence requiring fewer replays and running faster.
2. Selectively profiling kernels
Alternatively, you can only profile a specific kernel using the --kernels <context id/name>:<stream id/name>:<kernel name>:<invocation> option.
For example, nvprof --kernels ::foo:2 --analysis-metrics ./your_cuda_app will profile all analysis metrics for the kernel whose name contains the string foo, and only on its second invocation. This option takes regular expressions, and is quite powerful.
You can mix and match the above two approaches to speed up profiling. You will be able to find more help about these and other nvprof options using the command nvprof --help.

RStudio lp_solve hanging, how to fix

I am using lp_solve in R to solve an optimization problem, but sometimes the function runs into an issue and hangs. RStudio has the red stop-sign icon that I can click to terminate the program; however, for some reason the stop sign does not break out of this particular error.
Other than clicking the stop sign, is there any way to stop the console when a function gets stuck? Something I can do automatically (i.e. if the console is stuck hanging for 10+ seconds, then terminate) would be great.
Thanks!
If the problem is that the optimization is too complex, you can set a time limit with the lp.control function (from the lpSolveAPI package):
lp.control(lprec, timeout = 60)   # lprec is your model object; seconds before termination
lp_solve keeps searching for better answers, so stopping it early will yield the best answer found within that timeframe.

julia-client: can't render lazy

Could somebody please explain to me what this message might mean?
I have the Julia client running in Atom, and my code works properly and gets me the results, but for some line executions (Ctrl+Enter) the instant eval gives me "julia-client: can't render lazy".
It appears that, behind the scenes, the code is executed, but the inline evaluation prefers not to output anything.
The lines corresponding to these messages usually return two-dimensional arrays or dataframes, and in Julia the type and the dimensions are normally printed in the eval, but for these specific lines it can't render.
I could not find similar reports anywhere else.
julia version 0.5.0-rc3
This is a problem with package versions being out of sync. If you're on the Julia release (v0.5), this will be fixed with a Pkg.update(). In the future, this kind of question is better suited for the Juno discussion board.

How can I label my sub-processes for logging when using multicore and doMC in R

I have started using the doMC package for R as the parallel backend for parallelised plyr routines.
The parallelisation itself seems to be working fine (though I have yet to properly benchmark the speedup); my problem is that the logging is now asynchronous and messages from different cores get mixed in together. I could create different logfiles for each core, but I think a neater solution is to simply add a different label for each core. I am currently using the log4r package for my logging needs.
I remember that when using MPI each processor got a rank, which was a way of distinguishing each process from the others, so is there a way to do this with doMC? I did have the idea of extracting the PID, but this seems messy and will change for every iteration.
I am open to ideas though, so any suggestions are welcome.
EDIT (2011-04-08): Going with the suggestion of one answer, I still have the issue of correctly identifying which subprocess I am currently inside, as I would either need separate closures for each log() call so that it writes to the correct file, or I would have a single log() function, but have some logic inside it determining which logfile to append to. In either case, I would still need some way of labelling the current subprocess, but I am not sure how to do this.
Is there an equivalent of the mpi_rank() function in the MPI library?
I think having multiple processes write to the same file is a recipe for disaster (it's just a log though, so maybe "disaster" is a bit strong).
Oftentimes I parallelize work over chromosomes. Here is an example of what I'd do (I've mostly been using foreach/doMC):
foreach(chr=chromosomes, ...) %dopar% {
cat("+++", chr, "+++\n")
## ... some undoubtedly amazing code would then follow ...
}
And it wouldn't be unusual to get output that tramples over each other ... something like (not exactly) this:
+++chr1+++
+++chr2+++
++++chr3++chr4+++
... you get the idea ...
If I were in your shoes, I think I'd split the logs for each process and set their respective filenames to be unique with respect to something happening in that process's loop (like chr in my case above). Collate them later if you must ... i.e. map/reduce your log files :-)
