Is there an equivalent (or something similar) of the launch_configuration
function from CUDA.jl in the AMDGPU.jl package? If not, do you have any advice on how to query the configuration of an AMD GPU? Thanks.
Just as python -d (debug/verbose) spews lots of information at interpreter startup, how can I achieve something similar with Julia? (My specific problem is to find out why, when called from Python via pyjulia, Julia 0.6-dev can't load boot.jl.)
There is no verbose startup mode. You can, however, run julia and/or the python process loading libjulia under gdb (e.g. gdb --args python <your script>). Just spitballing, but not being able to find boot.jl sounds like a path problem. I'm also somewhat surprised that libjulia would be trying to find or load boot.jl in the first place, since that file and many others are usually baked into a system image file with a .ji extension (sys.ji previously and inference.ji more recently).
I'm looking at buying a couple of Xeon Phi 5110P cards, but I'm trying to estimate how much code I would have to change and what other software I would need.
Currently I make good use of R on a multi-core Windows machine (24 cores) by using the foreach package, passing it other packages (forecast, glmnet, etc.) to do my parallel processing.
Having a Xeon Phi, I understand I would want to compile R for it:
https://software.intel.com/en-us/articles/running-r-with-support-for-intel-xeon-phi-coprocessors And I understand this could be done with a trial version of Parallel Studio XE.
Do I then need to edit R's Makeconf file, adding the C/C++ flags for the Phi, and compile all the needed packages before the Parallel Studio trial expires? Or do I not need to edit Makeconf to get the benefits of foreach on the Phi?
It seems like some of this will be handled automatically once R is compiled, with offloading done by the Math Kernel Library (MKL), but I'm not totally sure of this.
Somewhat related question: Is the Intel Xeon Phi usable without a costly Intel Compiler?
Also, revolutionanalytics.com seems to have a few related blog posts, but they're not entirely conclusive for me: http://blog.revolutionanalytics.com/2015/05/behold-the-power-of-parallel.html
If all you need is matrix operations, you can compile R with the MKL libraries per [Running R with Support for Intel® Xeon Phi™ Coprocessors][1], which requires the Intel compiler. Microsoft R comes precompiled with MKL, but I was not able to get the automatic offload to work; I had to compile R with the Intel compiler for it to work properly.
You could use the trial version of the compiler and build R during the trial period to see if it fits your purpose.
If you want to use things like the foreach package by setting up a cluster, since each node is a Linux computer, I'm afraid you're out of luck. On page 3 of [R-Admin][1] it says:
Cross-building is not possible: installing R builds a minimal version of R and then runs many R scripts to complete the build.
You would have to cross-compile from the Xeon host for the Xeon Phi nodes with the Intel compiler, and it's just not feasible.
The last way to utilize the Phi is to rewrite your code to call it directly. Rcpp provides an easy interface to C and C++ routines; if you find a C routine that runs well on the Phi, you can call it from within your code. I have done this with CUDA: Rcpp is a thin layer, and there are good examples of how to use it. If you combine those with examples of calling the Phi card's nodes, you can probably achieve your goal with less overhead.
But if all you need is matrix ops, there is no quicker route to supercomputing than a good double-precision NVIDIA card and preloading NVBLAS (e.g. via LD_PRELOAD) during R startup.
I want to do points-to analysis on LLVM IR. I want it to be path-sensitive, which means that when I print out the results, I need to append the path condition for each "may" points-to relation.
I plan to use symbolic execution to achieve this goal.
Are there any tools in LLVM, or stand-alone tools, that can solve the symbolic constraints?
Thank you!
Some pointers to get you started:
The ScalarEvolution LLVM pass is basically symbolic execution of arithmetic expressions (taking loops into consideration).
KLEE is a full symbolic-execution VM for LLVM IR.
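As a loose illustration of the stand-alone-solver side (this goes beyond the pointers above: Z3 is one widely used SMT solver, and the constraint below is made up), the Z3 Python bindings can discharge the kind of path condition a symbolic executor collects:

# A sketch using the Z3 Python bindings (pip install z3-solver).
# The constraint is a hypothetical path condition, not output from a real analysis.
from z3 import Int, Solver, sat

x = Int('x')
y = Int('y')

s = Solver()
s.add(x > 100, y == 2 * x)  # made-up condition guarding a "may" points-to edge

if s.check() == sat:
    print(s.model())  # a concrete assignment that drives execution down this path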
Is there a way to customize the SBCL REPL so that it works similarly to the CLISP REPL? The standard SBCL REPL isn't really usable on Mac OS X: I can't use the arrow keys or backspace.
You could use rlwrap.
If you have MacPorts installed you can get it with
sudo port install rlwrap
Then invoke sbcl with
rlwrap sbcl
Most people use the SBCL REPL with SLIME, which gives it far more features than the readline support used in CLISP. If you aren't comfortable with Emacs, you can try ABLE (available through Quicklisp), a very simple editor that supports basic REPL features on par with readline, but also has basic code highlighting and a built-in HyperSpec.
There's vim+slime (slimv) too, for vim users.
You can try linedit, which is available via Quicklisp. That said, Emacs+SLIME is a real beast; Firebug is the only thing close to it that I'm aware of.
Are there any 'standard' plugins for detecting the CPU architecture in scons?
BTW, this question was already asked here in a more general form; I'm just wondering whether anyone has taken the time to incorporate this information into scons.
Checking for i386 is rather compiler-dependent, and won't detect non-x86 32-bit architectures. Assuming the Python interpreter used by scons runs on the CPU you are interested in (not always the case: think cross-compilation), you can just use Python itself:
import platform
print(platform.machine())       # e.g. 'x86_64'
print(platform.architecture())  # e.g. ('64bit', 'ELF')
If you need something more sophisticated, you may have to write your own configure function (see the sketch below), but it may be better to deal with it in your code directly.
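For instance, a custom check might look roughly like this (a sketch only: CheckArch and TARGET_X86_64 are hypothetical names, and the test merely distinguishes x86-64 targets from everything else via predefined compiler macros):

def CheckArch(context):
    # Try to compile a snippet that only builds when the compiler targets x86-64.
    context.Message('Checking for an x86-64 target... ')
    result = context.TryCompile("""
#if !defined(__x86_64__) && !defined(_M_X64)
#error "not x86-64"
#endif
int main(void) { return 0; }
""", '.c')
    context.Result(result)
    return result

env = Environment()
conf = Configure(env, custom_tests={'CheckArch': CheckArch})
if conf.CheckArch():
    conf.Define('TARGET_X86_64', 1)
env = conf.Finish()

Because this compiles against the actual toolchain, it reports the architecture the compiler targets rather than the one the scons process runs on, which also makes it safer under cross-compilation.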
Something like this?
env = Environment()
conf = Configure(env)
# __i386__ is predefined by gcc/clang when targeting 32-bit x86, so this
# checks the architecture the compiler targets, not the host CPU.
if conf.CheckDeclaration("__i386__"):
    conf.Define("MY_ARCH", '"blahblablah"')  # quote the value to emit a C string literal
env = conf.Finish()