Does anyone have a suggestion as to whether it would be possible to run R scripts on a Synology DS214 NAS? If yes, further information or links would be much appreciated.
The underlying OS on the DiskStation is Linux-based, so in principle it may be possible, but I highly doubt it will be in practice. First of all, you'd need to compile R for the Marvell Armada XP CPU, which is part of the ARM family of CPUs. To compile R on this chip you'll need all the software R depends on, as described in the R Installation and Administration manual.
Finally, a DiskStation is not at all designed to crunch numbers. Even if you do manage to overcome the potentially insurmountable problems of compiling the software you need on the DS214, the execution of code isn't going to be, well, snappy. Also, the 512 MB of on-board RAM will make all but small data analysis jobs impossible, or impossibly slow.
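To put the memory limit in perspective, here is a quick back-of-the-envelope check in R (run it on any machine, not the NAS): a double-precision value takes 8 bytes, so even a single moderately sized matrix eats a large share of 512 MB.

    # A double-precision number takes 8 bytes, so a 5000 x 5000 numeric
    # matrix alone needs roughly 200 MB:
    m <- matrix(0, nrow = 5000, ncol = 5000)
    print(object.size(m), units = "MB")
    # Intermediate copies made during a typical analysis multiply that,
    # which is why 512 MB rules out all but small jobs.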
Related
I'm currently working on university research software that uses statistical models to perform calculations around Item Response Theory. The entire source code is written in Go, and it communicates with an Rscript server to run scripts written in R and return the generated results. As expected, the software has some dependencies needed to work properly (one of them, as mentioned above, is having R/Rscript installed along with some of its packages).
Because I'm new to software development, I can't find a proper way to manage all these dependencies on Windows or Linux (I'm prioritizing Windows right now). What I was thinking of is a kind of script which checks whether [for example] R is properly installed and, if so, whether each required package is also installed. If everything checks out, the software could then be installed without further problems.
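Something along the lines of the sketch below is what I have in mind for the R side; the package names are just placeholders for whatever the scripts actually require:

    # Sketch: check that the required R packages are present and install
    # any that are missing. Package names here are placeholders only.
    needed  <- c("ltm", "mirt")
    missing <- needed[!vapply(needed, requireNamespace, logical(1), quietly = TRUE)]
    if (length(missing) > 0) {
      install.packages(missing, repos = "https://cloud.r-project.org")
    }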
My question is: what's the best way to do something like that, and is it possible to do the same for other dependencies, such as Python, Go, and some of their libraries? I'm also open to suggestions if installing programming languages locally on the machine isn't the proper way to manage software dependencies, or if there's a more convenient approach than writing a script.
Sorry if any needed information is missing; let me know and I'll add it.
Thanks in advance
On the right-hand side is the result from an EC2 instance with 36 cores and 64 GB RAM, while on the left is my laptop with 8 cores and 8 GB RAM.
I'm new to running R on an AWS EC2 instance, so I probably need to configure R to make use of the EC2 instance's raw compute power.
Could someone please advise how to do that, or point out anything I'm missing here?
Thanks!
I found a better answer, linked below.
The unnecessary downvotes are quite disappointing.
https://github.com/csantill/RPerformanceWBLAS/blob/master/RPerformanceBLAS.md
We frequently hear that R is slow; if your code makes heavy use of vector/matrix operations, you will see significant performance improvements.
The precompiled R distribution downloaded from CRAN uses the reference BLAS/LAPACK implementation for linear algebra operations. These implementations are built to be stable and cross-platform compatible, but they are not optimized for performance. Most R programmers use these default libraries and are unaware that highly optimized libraries are available, and that switching to them can yield a significant performance improvement.
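To see which BLAS/LAPACK your R build is linked against, and to get a rough before/after comparison when switching libraries, a minimal sketch (timings will of course vary by machine):

    # Which BLAS/LAPACK is this R build linked against?
    sessionInfo()   # recent R versions list the BLAS and LAPACK libraries here

    # Rough linear-algebra benchmark: time a large matrix cross-product
    # before and after switching to an optimized BLAS (OpenBLAS, MKL, ...).
    set.seed(1)
    n <- 2000
    x <- matrix(rnorm(n * n), n, n)
    system.time(crossprod(x))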
I am running RStudio (64-bit) on a Windows 10 laptop with an Nvidia GPU in it; however, when I run code, specifically R Shiny apps, it takes a long time. This laptop has a GPU, but Task Manager shows that the GPU is not being utilized. Would the GPU make my program run faster? I do not know much about hardware, so forgive my ignorance regarding this.
In answer to your question: the GPU will have no impact on the speed of your code as it is currently written.
By default, most R code is single-threaded, meaning it will only use one CPU core. There are various ways to do parallel processing (using more than one core) in R, and there are also packages that can make use of GPUs. However, it sounds like you are not using either of these.
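As a minimal illustration of explicit parallelism with the base parallel package (a sketch only; the dummy task just stands in for whatever your app actually computes):

    library(parallel)

    # Dummy workload standing in for a real computation.
    slow_task <- function(i) { Sys.sleep(0.5); i^2 }

    n_cores <- max(1, detectCores() - 1)   # leave one core for the OS
    cl <- makeCluster(n_cores)             # socket cluster, works on Windows too
    res <- parLapply(cl, 1:8, slow_task)   # spreads the tasks across the cores
    stopCluster(cl)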
There are various ways you could restructure your application to make it more efficient, but how you would go about it is specific to your code. I would suggest you ask a separate question about that.
Also, Hadley Wickham's excellent book, Advanced R, has techniques for profiling and benchmarking your code to improve performance: http://adv-r.had.co.nz/Profiling.html
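As a starting point, base R's sampling profiler is enough to see where the time goes; a minimal sketch with a deliberately slow placeholder function:

    # Profile a deliberately slow function with base R's sampling profiler.
    slow_fun <- function() {
      x <- matrix(rnorm(1e6), 1000, 1000)
      for (i in 1:5) s <- svd(x)          # stands in for the expensive part of your app
      invisible(s)
    }

    Rprof("profile.out")
    slow_fun()
    Rprof(NULL)
    summaryRprof("profile.out")$by.self   # shows which calls took the most time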
I have a script in R that uses loess to find a curve of best fit for some data, integrates that curve, and then does some simple arithmetic from there to come up with a number. It is for a handheld testing unit, and therefore I need to run it on a microcomputer. There are ones out there with ARM processors which will run Linux (520 MHz low-power ARM processor, 128 MB of RAM).
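Here is a minimal sketch of the kind of calculation involved, with made-up data standing in for the real measurements:

    # Minimal version of the calculation: fit a loess curve, integrate it,
    # then do simple arithmetic on the result.
    set.seed(42)
    x <- seq(0, 10, length.out = 50)
    y <- sin(x) + rnorm(50, sd = 0.1)     # made-up data

    fit  <- loess(y ~ x)
    f    <- function(t) predict(fit, newdata = data.frame(x = t))
    area <- integrate(f, lower = min(x), upper = max(x))$value
    area                                  # the reported number is derived from this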
My question is: will this have enough power to do the calculations, and will R even run on a system like this?
Any suggestions or insight would be greatly appreciated!
Thanks.
You need compatible C and Fortran compilers, as well as the GNU OS and Linux kernel, to build R from source. The lack of compatible compilers is likely to doom this strategy. I suspect you will have better success using the code from loess and simpleLoess as a template for a compiled version in C.
As to your second question, R can be made to run on ARM processors. See, for example, http://maemo.org/packages/view/r-base-core/, which is a port to Maemo, an operating system for ARM-based handheld devices. It can also be compiled for Android: http://rwiki.sciviews.org/doku.php?id=getting-started:installation:android.
As @Carl said, the first question can't be answered without knowing more details.
I stumbled on this post depicting an open-source HW/SW R-based graphing calculator built on this motherboard!
HIH!
I'm in the research phase of my next computer build. I have the idea in my head of running a hypervisor as the base of the system, but I would want to be able to take a shot at programming OpenCL from one of the OSes installed on the hypervisor, and maybe do some gaming. Would I have enough access to the GPU to achieve this effectively, or am I better off installing an OS that I will do development (and gaming) from and then just virtualizing any other systems on top of that?
What are your recommendations for a hypervisor: VMware, Microsoft, or something else?
Side note: I recently graduated with a BS in CS, and massively parallel processing seems like a good thing to learn, though I won't be doing any 'real'/major development work. Also, I'm aware that CUDA is more mature in its development, but I'm sticking with OpenCL for a few reasons, so please don't try to persuade me otherwise.
Thanks for your input!
dave k.
What's your focus: virtualisation or OpenCL?
Hak5 did a nice walkthrough of the Debian-based virtualisation environment Proxmox, but I don't know whether it gives virtual hosts hardware access or supports OpenCL virtualisation.