What is the relation between RStudio and Rserve?

I'm new to R and I decided to put R on a machine I have and see if I can remotely run code that is on my desktop computer.
While searching for how to do that, I came across the names "Rserve" and "RStudio". As far as I can tell, Rserve is a package (actually, it seems to be the package) that I can use to set up the server, while RStudio is an IDE.
My question is: does RStudio use RServe "under the hood"? And, if it doesn't, then how does RStudio compare to RServe? (I.e., which one is better and why?)
[I realize this question could be a duplicate, but I couldn't find any similar question.]

Rserve is a client/server implementation written in pure C that starts a server and spawns multiple processes, each with its own R workspace. These are processes rather than threads because of R's limitations around multithreading. It uses its QAP protocol as the primary transport between the client and the server. You execute commands via a client (PHP, Java, C++) against the server, and it returns REXP objects that are essentially mappings to R's underlying SEXP data objects. Rserve also offers a WebSockets version that can transmit data over WebSockets, but its API is not well documented. It also supports basic authentication through a configuration file.
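To make that concrete, here is a minimal sketch of talking to Rserve from a second R session via the RSclient package (both packages are on CRAN; the default port 6311 and the exact calls are from memory, so check the Rserve documentation):
# Minimal sketch: install Rserve and the RSclient package, start the daemon,
# then evaluate an expression on the server from a client session
R -e 'install.packages(c("Rserve", "RSclient"), repos = "https://cloud.r-project.org")'
R CMD Rserve                      # starts the Rserve daemon, listening on port 6311 by default
Rscript -e 'library(RSclient); con <- RS.connect(); print(RS.eval(con, rnorm(3))); RS.close(con)'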
RStudio is a C++ and GWT application that provides a web-based front end to R. AFAIK it uses JSON as its primary transport and supports authentication through PAM. Each user has a workspace configured in their home directory. It runs a server, similar to but not the same as Rserve, to communicate with R using Rcpp. It also has its own plotting driver that wraps the plot device so it can pick up the plots to be served to the UI. It has much more functionality, such as stepping through your code from the UI and viewing workspace variables.
Functionally they are similar in that both provide a client/server connection to R, but IMHO the comparison stops there.

I believe they are separate projects (though I could be wrong). I've never heard of RServe and there does not appear to be any mention of it in the documentation for RStudio. I have used and would recommend RStudio Server. It is relatively easy to set up and super easy to use once it is set up. This is a helpful guide to setting up a server on Amazon EC2:
#Create a user, home directory and set password
sudo useradd rstudio
sudo mkdir /home/rstudio
sudo passwd rstudio
#Enter Password
sudo chmod -R 0777 /home/rstudio
#Update all files from the default state
sudo apt-get update
sudo apt-get upgrade
#Be Able to get R 3.0
sudo add-apt-repository 'deb http://cran.rstudio.com/bin/linux/ubuntu precise/'
#Update files to use CRAN mirror
#Don't worry about error message
sudo apt-get update
#Install latest version of R
#Install without verification
sudo apt-get install r-base
#Install a few background files
sudo apt-get install gdebi-core
sudo apt-get install libapparmor1
#Change to a writeable directory
#Download & Install RStudio Server
cd /tmp
wget http://download2.rstudio.org/rstudio-server-0.97.551-amd64.deb
sudo gdebi rstudio-server-0.97.551-amd64.deb
#Once you've run the commands above, you can access RStudio through your local browser. Navigate to the Public DNS of your instance on port 8787, similar to:
#http://ec2-50-19-18-120.compute-1.amazonaws.com:8787

The earlier answers, which are about three years old, provide outdated information, such as here.
Updated correction
RStudio is a firm that provides the open-source RStudio IDE for R. They also sell commercial products such as RStudio Server Pro, which markets itself with load balancing and related features. Apparently, the successful open-source project has led the way to a commercial market.

You may also be thinking of Microsoft R Server, which is now called Microsoft Machine Learning Server.
There is also RStudio Server by RStudio.
Anyway, how to install both can be found here.

Related

Error while opening RStudio through the terminal

I am working with Unix, and from now on I need to use the university server to run some packages in R.
1. I accessed the server by ssh
2. I downloaded Miniconda
3. source ~/.bashrc
4. Downloaded RStudio:
5. conda install -c r rstudio
And when I try to open RStudio, I receive the following error:
'QXcbConnection: Could not connect to display'
Any help will be super useful.
Depending on your platform and your university's security policy, this may or may not work:
https://unix.stackexchange.com/questions/12755/how-to-forward-x-over-ssh-to-run-graphics-applications-remotely
In short, you need to explicitly enable X11 forwarding (though I was under the impression that it is on by default). You also need to run an X11 server locally; there are a few for Windows and macOS. If the above does not help, it is most likely due to a security policy, and you may want to speak to your university's sysadmin.
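For example, roughly (hostname and username are placeholders; the flags and the DISPLAY value depend on the server's sshd configuration):
# Connect with X11 forwarding enabled, then launch RStudio so it displays locally
ssh -X yourname@university-server      # use -Y for "trusted" forwarding if -X is refused
echo $DISPLAY                          # should print something like localhost:10.0
rstudio &                              # the window is rendered by your local X server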

Installing cross-compiled Debian packages to a fake "rootfs" with dpkg

The setup I have is like this: I have two sets of libraries that are compiled for amd64 (PC) and armel (ARM). They are both used to cross-compile some software on a build machine.
The first ones (amd64) can be updated without hassle by updating the apt repository and using apt-get install on the build machine. The packages for ARM, however, I don't want to install with apt, because it does not support installing to a different directory. If I installed to the default directories, the versions could not coexist. Right?
So far, the build machine was updated manually each time there was a new version of the packages, simply by extracting them with dpkg -x to a dedicated "fake" rootfs directory. This is where the compiler would also look when cross-compiling other software. The problem is, there is no information about these extracted packages or their versions anywhere on the system, right? It should have been in the status file.
My thought was to have these packages installed into this rootfs dir with dpkg -i <package.deb> --root=<rootfs>. Would this work? I have a feeling it will not, because the deb packages have no post/pre-remove/install scripts, so it may work for a virgin install somehow, but not for upgrading? Also, what must the rootfs directory structure look like and what must it contain in order for this to work even the first time? Is there a tool to help with this?
Thanks.
Once you have a base armel Debian system, you can actually enter it and run the armel code inside it using something like QEMU. The qemu-arm-static tool (in the qemu-user-static package) can use the binfmt_misc capability in Linux so that ARM executables are run directly under QEMU's ARM emulation. So you can run dpkg, apt-get, and so on inside the armel "rootfs" while running on amd64 hardware.
Example:
my_arm_system=/mnt/arm_system
sudo cp /usr/bin/qemu-arm-static "$my_arm_system/usr/bin/"
sudo chroot "$my_arm_system" apt-get update
sudo chroot "$my_arm_system" apt-get install $somepkg
sudo chroot "$my_arm_system" /bin/bash
As for setting up the base armel system in the first place: Debootstrap is the typical method for setting up a Debian base system, whether in a chroot or otherwise. You can use it for installing a base system of a different architecture, but it takes a few extra steps:
distro=jessie # or whatever
echo "Debootstrap phase 1"
sudo mkdir "$my_arm_system"
sudo debootstrap --arch=armel --verbose --foreign "$distro" "$my_arm_system"
sudo cp /usr/bin/qemu-arm-static "$my_arm_system"/usr/bin/
echo "Debootstrap phase 2"
sudo chroot "$my_arm_system" /debootstrap/debootstrap --second-stage
Multistrap is another tool that might be useful; it is intended for setting up Debian environments of one architecture on a host of a different architecture, or for using more complicated APT source combinations. It's not perfect, as it doesn't follow all the deb installation "rules" exactly. It takes some shortcuts/deviations in order to make its job reasonably possible.
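If you want to try it, the invocation looks roughly like this (a sketch from memory; the interesting part is the configuration file, whose format is described in man multistrap):
# Build an armel root filesystem described by my_multistrap.conf into $my_arm_system
sudo multistrap -a armel -d "$my_arm_system" -f my_multistrap.conf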

s3cmd: how to use server-side encryption?

I'm trying to encrypt some files on Amazon S3 using server-side encryption. According to this link
http://s3tools.org/kb/item9.htm
I should only add this flag
--server-side-encryption
to the put or sync command I'm trying to run, but when I do that I get an "s3cmd: error: no such option: --server-side-encryption" message.
How do I run this command to use server side encryption?
s3cmd put file.zip s3://test/file.zip
I'm using Ubuntu 14.04 server, 64-bit.
You need a more recent version of s3cmd than what is in the Ubuntu repositories. Use the github.com/s3tools/s3cmd master branch (preferred), or the copy in the Debian experimental repository.
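A rough sketch of the GitHub route (commands assumed for a typical setup; remove the old package first so the two copies don't conflict):
# Remove the packaged 1.1.x version, then install the current master from GitHub
sudo apt-get remove s3cmd
git clone https://github.com/s3tools/s3cmd.git
cd s3cmd && sudo python setup.py install
s3cmd --version                                            # should no longer report 1.1.x
s3cmd put --server-side-encryption file.zip s3://test/file.zip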
If you've upgraded, make sure you don't have any remnants of the old version. I had this issue because I had installed the original package via the system package manager, but when I upgraded I installed via Python. This left me with the impression that I had upgraded, but I had not removed the old version.
I discovered this because
dpkg -l s3*
still listed v1.1, while
pip list | grep s3
showed 1.6.1.
I fixed the issue by uninstalling the old package using the system package manager:
dpkg -r s3*
Then when the cron job ran, it used the Python package, version 1.6.1, and no errors occurred.

R - How to set the path of install.packages() for Shiny Server? - Ubuntu

For my system (Ubuntu 12.04 and R 3.0.3), whenever I install a custom package in R via
>install.packages()
the package is installed by default to
/home/USER/R/x86_64-pc-linux-gnu-library/3.0/
as opposed to system-wide in
/usr/local/lib/R/site-library/
which is needed for shiny-server to work with that package.
My temporary solution is to copy the packages to the correct folder after the fact.
Question: How can I set the default install path from the start to avoid this problem?
Yes -- I consider this to be a misfeature and disable my per-user directory.
Moreover, I mostly use a script, install.r (a version of which ships as an example in the littler package that you can install as part of Ubuntu), which simply and explicitly sets the /usr/local/lib/R/site-library directory as the default. With a patch we got into R 3.0.2 or 3.0.3, a normal user can write into that directory, and R will now create group-writeable directories so other users can update and overwrite; just make everybody a member of the same group, say staff or admin. And then you don't even need sudo or root.
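A rough sketch of the same idea without littler (the path is the standard Debian/Ubuntu site library; this assumes you already have write access to it, e.g. via the group setup described above, and the package name is just an example):
# Install straight into the site library so shiny-server and every other user can see the package
R -e 'install.packages("ggplot2", lib = "/usr/local/lib/R/site-library", repos = "https://cloud.r-project.org")'
# Or make it the default for your account by pointing R_LIBS_USER at it in ~/.Renviron
echo 'R_LIBS_USER=/usr/local/lib/R/site-library' >> ~/.Renviron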
I have essentially answered this same question a few times here over the years (minus the Shiny angle, which is not really relevant), so feel free to search for the others for more details, examples, ...
I would propose a different approach.
The problem is that shiny-server cannot find the packages that you install because it runs them as a different user, called shiny. This user is created upon installation of shiny-server.
The easiest (and safest, IMHO) way to solve this is to just install the packages as the shiny user, using the following steps (collected into a single sketch below).
Set a password for the user using sudo passwd shiny, then enter and confirm a password of your choosing.
Switch to the shiny account using: su - shiny
Call up R using $ R (without sudo)
Install the required packages, in this case: install.packages("shinydashboard")
Note that if you have rstudio-server installed on the same machine, then you can perform steps 2-4 using that interface. Simply go to the same domain/IP and use :8787 for the rstudio-server interface instead of :3838 for shiny-server.
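The same steps as a single shell sketch (the package name is just the example from above; the CRAN mirror is an assumption so the non-interactive install does not prompt for one):
sudo passwd shiny                                                                    # step 1: give the shiny account a password
su - shiny                                                                           # step 2: switch to the shiny user
R -e 'install.packages("shinydashboard", repos = "https://cloud.r-project.org")'    # steps 3-4: install into shiny's library
exit                                                                                 # back to your own account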
Adapted from my answer here

Installing R packages on Ubuntu 8.10

Preface: I'm an OS X user coming to Linux, so excuse my ignorance in advance.
I've installed R using Synaptic and now I'm trying to install packages.
I open R then try
install.packages("some_package")
The system tries to default to the site-library, then tells me it's not writable, and then asks about making a personal library.
Should I just make site-library writable? Or is there something more to this?
The directory /usr/local/lib/R/site-library is the default location; the directory has ownership root:staff by default. If you add yourself to group staff (easiest: by editing /etc/group and /etc/gshadow) you can write there, and you do not need sudo powers for the installation of packages. That is what I do.
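The same group change as a single command, if you prefer not to edit those files by hand (log out and back in afterwards so the new group membership takes effect):
# Add the current user to the staff group that owns the site library
sudo usermod -a -G staff $USER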
Alternatively, do apt-get install littler and copy the example file /usr/share/doc/littler/examples/install.r to /usr/local/bin and chmod 755 it. Then you can just do sudo install.r lattice ggplot2, to take two popular examples.
BTW Ubuntu 8.1 does not exist as a version. Maybe you meant 8.10? Consider upgrading to 9.10 ...
Edit: Also have a look at this recent SO question.
I faced the same issue. The most convenient way is to start R as the superuser:
sudo R
After that, install.packages("some package") should work.
If you are the only user who needs those packages, then the easiest and neatest way is to let R create a personal library for you. That way you don't need to mess with the system directories managed by the package management system.
Another way to install some packages in Ubuntu is to look for Ubuntu packages with names like r-cran-*. This way you do not have to worry about dependencies, the packages become available to all users, and updates are taken care of by the Ubuntu package management system. But only a small proportion of CRAN packages are available this way and you may not get the latest version.
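For example (package names are just common cases; availability depends on your Ubuntu release):
# lattice and ggplot2 as pre-built Ubuntu packages, installed system-wide for all users
sudo apt-get install r-cran-lattice r-cran-ggplot2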
Well, I prefer to install packages into a local R folder, ~/R/, but it's just a matter of individual preference... You can also grant yourself write permission to the default library; it doesn't make any difference.
Be sure to add up-to-date packages; the packages available in the default repos are quite old. R v2.9.0 is available by default in 9.10, while v2.10.1 is now available.
So to stay up-to-date, add this line to the file /etc/apt/sources.list (replace <my.favorite.cran.mirror> with a CRAN server address; you can find server addresses at www.r-project.org > CRAN > Linux > Ubuntu):
deb http://<my.favorite.cran.mirror>/bin/linux/ubuntu karmic/
then run this line in terminal:
gpg --keyserver subkeys.pgp.net --recv-key E2A11821 && gpg -a --export E2A11821 | sudo apt-key add -
and if keys are imported properly, run:
sudo apt-get install r-base-core
or if you already installed R, run:
sudo apt-get update && sudo apt-get upgrade
You should also check out alias functions (try man alias in the terminal) to automate repetitive tasks... Get comfortable in the terminal; Synaptic is indeed a good tool, but most Linux users prefer the command-line approach for a good reason: it's highly customizable =)
I recommend that you stick with one server (be careful when choosing the default server; I prefer UCLA's server, Berkeley works just fine, and the main server is usually busy as hell... so there...)
Alternatively, you can set the default CRAN server in the .First() function (for example in your ~/.Rprofile):
# replace '<my.favorite.cran.mirror>' with your preferred mirror
.First <- function() {
  options(repos = c(CRAN = "<my.favorite.cran.mirror>"))
}
now you can just type:
> install.packages('<somepackage>')
and you'll lose the boring Tcl/Tk serverlist window! Oh, what a relief!
Welcome to Ubuntu!
Cheers, mate!
