I want to check to see if Flux is using the GPU or CPU on my computer. Is this possible with a built-in Flux.jl function?
One way that seems to work is:
julia> using Flux
julia> Flux.use_cuda[]
false
This seems to work because I do not have CUDA working on my Mac: under the hood, Flux checks for a functional CUDA installation and sets the use_cuda flag accordingly. You can see the source code that does this here: https://github.com/FluxML/Flux.jl/blob/de76e0853db8f0216addaaa4eacf6de39630d834/src/Flux.jl#L58 but it looks something like:
use_cuda[] = CUDA.functional() # Can be overridden after load with `Flux.use_cuda[] = false`
if CUDA.functional()
    if !CUDA.has_cudnn()
        @warn "CUDA.jl found cuda, but did not find libcudnn. Some functionality will not be available."
    end
end
I'm having a weird issue where my target, which interfaces with a slightly customized Python module (installed with pip install --editable) through reticulate, gives different results when it is called from an interactive R session than when targets is started from the command line directly, even when I make sure the other argument(s) to tar_make are identical (callr_function = NULL, which I use for interactive debugging). The function is deterministic and should return the exact same result, but it doesn't.
It's tricky to provide a reproducible example, but if truly necessary I'll invest the required time in it. I'd like to get tips on how to debug this and identify the exact issue. I have already safeguarded against potential pointer issues: the Python object is not passed around between different targets/environments (anymore); rather, it is used immediately to compute the result of interest. I also checked that the same Python version is being used by printing the result of reticulate::py_config() to screen, and I verified that both approaches use the same version of the customized module.
Thanks in advance..!
R code can get run in various ways, such as being called via source, loaded from a package, or read in from stdin. I'd like to detect this in order to create files that can work in a multitude of contexts.
Current experimental detector script is here: https://gitlab.com/-/snippets/2268211
Some of the tests are a bit heuristic, based on observation rather than documentation. For example, I'm not sure which of the two tests for running under littler is better:
if(Sys.getenv("R_PACKAGE_NAME") == "littler"){
message("++ R_PACKAGE_NAME suggests running under littler")
mode_found <- TRUE
}
if(Sys.getenv("R_INSTALL_PKG") == "littler"){
message("++ R_INSTALL_PKG suggests running under littler")
mode_found <- TRUE
}
and the test for being loaded from a package is simply seeing if the current environment is a namespace:
if(isNamespace(environment())){
  message("++ Being loaded by a package")
  mode_found <- TRUE
}
which seems to be true during package load, but I suppose it could be true in other contexts, for example when a file is read with source() using a local argument that is a namespace, as sketched below.
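For instance, here is a minimal sketch (not part of the snippet above; the stats namespace is used only as an example) showing that the same test is TRUE when code is source()d with a namespace as the local environment:
source(textConnection("print(isNamespace(environment()))"),
       local = asNamespace("stats"))   # prints [1] TRUE, although no package is being loaded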
In the end I suspect most of these cases won't matter to my application too much, but it might be useful to someone to have a set - as complete as possible - of detection tests.
So, are the tests in my detector script okay, and how could they be improved?
Let's say we have an array of package names as symbols, packages::Vector{Symbol} = [...], and we want to create a sysimage using PackageCompiler.jl. We could simply use
using PackageCompiler
create_sysimage(packages; incremental = false, sysimage_path = "custom_sys.dll")
but without a precompile_execution_file, this isn't going to be worth it.
Note: sysimage_path = "custom_sys.so" on Linux and "custom_sys.dylib" on macOS...
For the precompile_execution_file, I thought running the tests for each package might do it, so I did something like this:
precompilation.jl
packages = [...]
@assert typeof(packages) == Vector{Symbol}
import Pkg
m = Module()
try Pkg.test.(Base.require.(m, packages)) catch ; end
The try/catch is there because some tests throw errors and we don't want the whole script to fail.
Then, executing the following in a Julia session,
using PackageCompiler
import Pkg
packages = [...]
Pkg.add.(String.(packages))
Pkg.update()
Pkg.build.(String.(packages))
create_sysimage(packages; incremental = false,
                sysimage_path = "custom_sys.dll",
                precompile_execution_file = "precompilation.jl")
produced a sysimage dynamic library which loaded without a problem. When I did using Makie, there was no delay, so that part is fine; but when I did some plotting with Makie, there was still the first-time plot delay, so I am guessing the precompilation script didn't do what I thought it would.
Also, when using tab completion in the REPL, it would freeze the first time, but I am guessing this is an expected side effect.
There are a few problems with your precompilation.jl script that make the tests throw errors, which you don't see because of the try...catch.
But, although running the tests for each package might be a good idea to exercise precompilation, there are deeper reasons why I don't think it can work so simply:
Pkg.test spawns a new process in which tests actually run. I don't think that PackageCompiler can see what happens in this separate process.
To circumvent that, you might want to simply include() every package's test/runtests.jl file. But this is likely to fail too, because of missing test-specific dependencies.
So I would say that, for this to work reliably and systematically for all packages, you'd have to re-implement (or re-use, if you can) some of the internal logic of Pkg.test in order to add all test-specific dependencies to the current environment.
That being said, some packages ship ready-to-use precompilation scripts that help do just this. This is the case for Makie, which suggests in its documentation using the following file to build system images:
joinpath(pkgdir(Makie), "test", "test_for_precompile.jl")
I work on several R projects at the same time. One of them involves a simulation with a for-loop, which I hope to speed up by using a JIT compiler. To do so, following this recommendation, I added the following lines to the file Rcmd_environ in my R-directory/etc:
R_COMPILE_PKGS=TRUE
R_ENABLE_JIT=3
Now I wonder whether it is possible to turn this on and off via a script. That way, I wouldn't have JIT compilation in my other projects. Any ideas?
You can load the compiler library and then set the JIT level by calling the enableJIT function. For example, you can do
require(compiler)
enableJIT(3)
to get full JIT compilation.
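Since enableJIT returns the previous JIT level, you can also restore the old setting afterwards; a minimal sketch (the loop is just a placeholder for your simulation):
library(compiler)
old_level <- enableJIT(3)   # enable full JIT; returns the previous JIT level
# ... run the simulation for-loop here ...
enableJIT(old_level)        # restore the previous setting for other projects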
Does R provide a debugging command similar to Matlab's keyboard?
This command provides an interactive shell and can be used in any function.
This gives access to all variables, allowing one to verify that the input data is really what it should be (or to test why it's not working as expected).
Makes debugging a lot easier (at least in Matlab...).
It sounds like you're looking for browser().
From the description:
A call to ‘browser’ can be included in the body of a function. When reached, this causes a pause in the execution of the current expression and allows access to the R interpreter.
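For example, a minimal sketch (the function f below is made up purely for illustration):
f <- function(x) {
  y <- x^2
  browser()   # execution pauses here; inspect x and y, then type c to continue or Q to quit
  y + 1
}
f(2)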
It sounds like you're new to debugging in R, so you might want to read Hadley's wiki page on debugging.
Have a look at ?recover; this function provides great debugging functionality.
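A common way to use it is as the error handler, so that any error drops you into a menu of call frames to inspect; a minimal sketch (f is a deliberately broken toy function):
options(error = recover)        # on error, offer a menu of call frames to browse
f <- function(x) log(x) + "a"   # deliberately broken: adding a string throws an error
f(10)                           # the error triggers recover(); pick a frame, inspect it, enter 0 to exit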