What is the difference between Julia and Julia Pro offered by Julia Computing?
Does Julia Pro have any enterprise library which isn't available in Julia?
As you can read in the project description, there are a few optional packages you can install on top of the "free version" (mostly in the area of Excel integration and business workflows), but the main difference is in the installation process, especially on Windows or Mac:
With standard Julia you need three steps: install Julia itself, install an editor (e.g. Juno/Atom or VS Code with the Julia extension), and add the desired packages.
With JuliaPro, you get all three steps just by running a single installer.
JuliaPro is an all-in-one solution, much like Anaconda is for Python.
Related
I have some complicated code in R that uses the neuralnet library for some computations.
Unfortunately, I'm new to R and I have less than a week to obtain some results using the existing code, which takes quite a while on the processors I have at my disposal.
My idea is to run the code via Microsoft R Open (MRO), which could accelerate the computations, but I haven't been able to install the neuralnet library via Anaconda (I prefer Anaconda because it's simple and makes it easy to create environments). The installation hangs at "Solving environment" forever.
Is there a way to install these libraries so that they are compatible with MRO on Anaconda? Or should I give up on using Anaconda for this task?
I want to convert certain R packages (which have been installed under Windows) for Linux usage, so that I can simply upload those packages to the Linux server and avoid having to install them again in the Linux environment.
Is this doable?
Most likely not. Many Windows packages are installed from precompiled binaries but need to be compiled on Linux. This is especially true for packages that contain compiled code (such as C or C++): these need to be built on the target platform so that they are linked against that platform's libraries.
If the issue is the time it takes to maintain a set of packages, one thing you can do is create your own utilities package, which imports all of the packages you would want. Then, if you install your utilities package, it will automatically install all of the others.
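A minimal sketch of such a utilities package's DESCRIPTION file, assuming hypothetical names (the package name myutils and the listed packages are just examples of what you might depend on):

```
Package: myutils
Title: Meta-Package That Pulls in My Usual Dependencies
Version: 0.1.0
Description: Empty package whose only job is to declare the packages I
    want installed on every machine.
Imports:
    data.table,
    ggplot2,
    neuralnet
License: GPL-3
```

Installing it with a tool that resolves dependencies (for example remotes::install_local("myutils/"), or install.packages() against a repository that hosts it) then pulls in everything listed under Imports.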
I want to deploy my app cross-platform for Linux and I am confused:
Should I build separate packages for Red Hat and Debian, or use an installer script like the Qt Installer Framework?
It depends on your exact needs, but ideally you could do both.
RPMs, DEBs, and similar packages for other distributions are easier to obtain and install.
However, it sometimes takes a distribution a while to get your software into its package repositories. In those cases, having an installer like the one Qt provides comes in handy.
Say I have two R installations. Same version but one built for Ubuntu Linux (locally) with memory profiling and the other without. Do I need to compile the installed packages for each separately?
The short answer is 'No', as packages are unaffected by this optional feature of the R engine.
If you have particular questions concerning R use on Debian and Ubuntu, come to the r-sig-debian list.
I am developing a framework for reproducible computing with R. One problem I am struggling with is that some R code might run perfectly in version X.Y-Z of a package, but when you try to reproduce it 3 years later, the packages have been updated, some functions have changed, and the code doesn't run anymore. This problem also affects, for example, Sweave documents that use packages.
The only way to confidently reproduce the results is by installing the R version and the package versions that were used by the original author. If this were a single case, one could pull the appropriate versions from the CRAN archives and install them. But for my framework this is impractical, and I need to have the package versions preinstalled.
Assume for now that I restrict myself to a single version of R, e.g. 2.14. What would be a practical way to install many versions of R packages, so that I can load them on the fly? I suppose I can do something like creating separate library directories for every version of every package and then using custom lib.loc arguments while loading them. This is going to be messy though. Any tips or previous attempts to do something similar?
My framework runs on Ubuntu server.
You could install packages under versioned names (e.g. rename the directory to foo_1.0 instead of foo) and symlink the versions you want into one library to re-create a given R + packages snapshot. Obviously, the packages could actually live in a separate tree, so you could have library.projectX/foo -> library.all/foo/1.0.
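A minimal sketch of that layout in R, assuming a hypothetical package foo and the directory names from above (install.packages() fetches a current version by default; pinning an old one would mean installing from the CRAN archive instead):

```r
# Install foo into its own versioned slot under a shared tree.
dir.create("library.all/foo", recursive = TRUE)
install.packages("foo", lib = "library.all/foo")          # creates library.all/foo/foo
file.rename("library.all/foo/foo", "library.all/foo/1.0") # keep it under a version name

# Build a per-project library by symlinking the chosen versions into it.
dir.create("library.projectX", showWarnings = FALSE)
file.symlink(normalizePath("library.all/foo/1.0"), "library.projectX/foo")

# Load the snapshot by pointing lib.loc at the project library.
library(foo, lib.loc = "library.projectX")
```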
The operating system gives you even more handles for complete separation, and the Debian / Ubuntu stack has a ton of those available. Two I have played with are:
chroot environments: We use these to completely separate build environments from the host machine. For example, all Debian uploads I produce are built in an i386 pbuilder chroot hosted on my amd64 Ubuntu server. chroot is a very powerful Unix system call. Chroots, and particularly the pbuilder system built on top of them (for Debian package building), are meant to operate headless.
Virtual machines: These give you full generality. My not-so-powerful box easily handles three virtual machines: Debian i386, Ubuntu i386, as well as Windows XP. For this, I currently use KVM along with libvirt; this is Linux-specific. I have also used VirtualBox and VMware in the past.
I would try modifying the DESCRIPTION file and changing the "Package" field there by appending the version number.
For example, you download the package source from the CRAN page (http://cran.r-project.org/web/packages/pls/) and unpack the compressed file (pls_2.3-0.zip) into a directory ("pls/"). The next steps are to change the package name in DESCRIPTION ("pls/DESCRIPTION") and then install with the R command 'R CMD INSTALL pls/', where 'pls/' is the path to the package source with the modified DESCRIPTION file.
Playing with R library paths seems a dangerous thing to me.
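A minimal sketch of that renaming step, using the pls 2.3-0 example above (the version-stamped name pls.2.3.0 is just one possible convention, and renaming the DESCRIPTION alone may not be enough for packages with compiled code):

```r
# Read the DESCRIPTION of the unpacked source, stamp the version into the
# package name, and write it back.
desc <- read.dcf("pls/DESCRIPTION")
desc[1, "Package"] <- "pls.2.3.0"
write.dcf(desc, "pls/DESCRIPTION")

# Install the renamed source package, then load it under the new name.
system("R CMD INSTALL pls/")
library(pls.2.3.0)
```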