I found two ways to run a shiny application in the background:
The first one
path_aux = "R -e \"shiny::runApp('inst/app.R', launch.browser = TRUE)\""
system(path_aux, wait = FALSE)
The issue
This alternative seems to run a different version of my Shiny app. I have a variable called fileName, and before running my app I run this line:
fileName <- "OpenTree"
But when I run
path_aux = "R -e \"shiny::runApp('inst/app.R', launch.browser = TRUE)\""
system(path_aux, wait = FALSE)
My variable fileName has a different value, always the same one, and I don't know where it comes from.
Second Alternative
I tested another alternative:
rstudioapi::jobRunScript(path = "inst/shiny-run.R")
shiny-run.R
shiny::runApp(appDir = "inst/app.R", port = 3522)
This alternative works fine.
I would like to use the first one, because I may want to have multiple windows running my app (which is a package). I just want to know where this fileName value comes from. I don't know if system() takes a snapshot or something like that.
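To show what I mean about the separate process (a minimal sketch I have not verified; the FILENAME environment variable is just a name I made up), I could try passing the value explicitly, since, as far as I understand, system() starts a new R session that does not see my globals:
# pass the value through an environment variable, which child processes inherit
Sys.setenv(FILENAME = "OpenTree")
path_aux <- "R -e \"shiny::runApp('inst/app.R', launch.browser = TRUE)\""
system(path_aux, wait = FALSE)
# and inside app.R, read it back (hypothetical):
# fileName <- Sys.getenv("FILENAME", unset = "OpenTree")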
Thank you
Since I recommended it in a comment, I'll formalize it a little here: Docker. In hindsight, this answer turned into much more of a tutorial/howto than intended. I hope it isn't daunting; the steps are actually quite straightforward, and once you have docker available, you might only need to run something similar to:
$ docker run --rm -d -p 3838:3838 \
-v c:/Users/r2/R/win-library/4.0/shiny/examples/:/srv/shiny-server \
-v c:/Users/r2/shinylogs/:/var/log/shiny-server/ \
-v c:/Users/r2/R/otherlibrary/:/mylibrary/ \
rocker/shiny-verse:4.0.3
Very little of the below is R code; it is almost all run in a shell/terminal. I am testing it on Windows 10 using Git Bash, but this should work on macOS or Linux with very little modification.
Table-of-Contents
Install docker if not installed.
Pull the shiny-server image.
Start the shiny-server container (with app-dir mount).
Browse to the running app. Update your apps; changes take effect immediately.
Stop the server (when done for the day).
And then two sections on Logs and some ways to deal with Other Packages.
Content
First, make sure docker is available:
$ docker -v
Docker version 20.10.2, build 2291f61
If not found, then Get Docker is the best resource for installing it.
Pull one of the available images. I'm demonstrating with rocker/shiny-verse, which includes shiny and all of the tidyverse dependencies (image size is 1.95GB), but there is also rocker/shiny, which is slightly smaller (1.56GB). (The docs are a bit more detailed at the second link.)
I strongly recommend pulling a specific version instead of :latest (or no version), since you likely need to closely mimic the dev environment on your laptop. This running container will not use your local R; it has its own. So it is possible that you only have R-3.6 installed on your host computer and use R-4.0.3 within the container. Use the same (major.minor) version you're testing/developing on.
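If you want to double-check which version you are developing against before picking a tag, you can ask R on the host (the output comment below reflects the 4.0.3 setup used in this answer):
# report the host R version, so the rocker tag can match major.minor
R.version.string
# [1] "R version 4.0.3 (2020-10-10)"
getRversion()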
$ docker pull rocker/shiny-verse:4.0.3
4.0.3: Pulling from rocker/shiny-verse
a4a2a29f9ba4: Already exists
127c9761dcba: Already exists
d13bf203e905: Already exists
4039240d2e0b: Already exists
fffc4b622efe: Pull complete
c265253654a5: Pull complete
e0161c6ad391: Pull complete
8e7558fa9ec5: Pull complete
Digest: sha256:ce760db38a4712a581aa6653cf3a6562ddea9b48d781aad4f9582b8056401317
Status: Downloaded newer image for rocker/shiny-verse:4.0.3
docker.io/rocker/shiny-verse:4.0.3
Determine the (host) directory to "mount". I'm going to use the examples installed with R's shiny package, but you can use anything. There are two options:
Single App
Mount: the app directory itself, e.g., c:/Users/r2/R/win-library/4.0/shiny/examples/01_hello/
Use: http://localhost:3838/ will run that one app.
Directory of Multiple Apps
Mount: a directory with other app-directories within it, e.g., c:/Users/r2/R/win-library/4.0/shiny/examples/
Use: http://localhost:3838/ will show the directory of apps, and http://localhost:3838/01_hello/ will run the first app.
In your case, you might choose something like /path/to/your/package/inst/ and opt for the single-app option from above.
Per the docs at rocker/shiny, we'll mount this on /srv/shiny-server/.
$ docker run --rm -d -p 3838:3838 \
-v c:/Users/r2/R/win-library/4.0/shiny/examples/:/srv/shiny-server \
rocker/shiny-verse:4.0.3
ba4654d541ef39fb4882364446ece0b516e518baf946c21d1857565f05acd2c5
(FYI: the -p 3838:3838 is needed to "expose" the server outside of the running shiny-server "container". The first number is what is visible to your computer; the second number must remain 3838, since it is the port the server is actually listening on inside the container. If you don't include this, you won't be able to reach the server from your browser.)
Point your browser to http://localhost:3838 (or whatever port you assigned above) and you should see the app (if single-app) or the directory of apps.
When you are done, kill (stop) the container, referring to it by the name docker assigned to it (visible via docker ps):
$ docker stop exciting_gagarin
Logs
There will come a time when you want/need to look at logs (e.g., warnings/errors). There are two types of logs:
server logs, accessible by running
docker logs -f exciting_gagarin
(Ctrl-C to stop the logs, server keeps running.)
app logs (per session). For this, you can jump into the running container and look at the logs, but those logs will be deleted when the container is stopped (because files inside docker containers are by default ephemeral) ... and this can be inconvenient. Instead, you can mount the app logs to a local directory by adding this volume-mount directive to the docker run command:
-v c:/Users/r2/shinylogs/:/var/log/shiny-server/
When an app is running and a warning/error occurs, you should see files like:
$ ls -l c:/Users/r2/shinylogs/
total 5
-rw-r--r-- 1 r2 197121 112 Jan 8 08:38 shiny-server-shiny-20210108-133524-43293.log
-rw-r--r-- 1 r2 197121 237 Jan 8 08:53 shiny-server-shiny-20210108-135551-35903.log
-rw-r--r-- 1 r2 197121 271 Jan 8 08:54 shiny-server-shiny-20210108-135610-44445.log
-rw-r--r-- 1 r2 197121 271 Jan 8 08:54 shiny-server-shiny-20210108-135612-40369.log
-rw-r--r-- 1 r2 197121 145 Jan 8 08:57 shiny-server-shiny-20210108-140001-43857.log
Other Packages
It is possible that you have packages that are not included in rocker/shiny or rocker/shiny-verse. Fear not! Follow these instructions:
Create a local (not in-container) directory where you will install these missing packages. I create a separate folder because it is possible that the host OS is different from the in-container OS (Debian), in which case packages may not be in the right format. I'll use c:/Users/r2/R/otherlibrary. I will not use this path in the host-OS R installation.
Add a volume-mount to your command that makes this available inside the container:
-v c:/Users/r2/R/otherlibrary/:/mylibrary/
If you have a container running, you'll need to docker stop it and then rerun it with this addition.
In your app, add
.libPaths( c("/mylibrary", .libPaths()) )
Two things about this: (1) it is required for R to look in the alternate directory; and (2) this will do no harm if run on the host and "/mylibrary" does not exist. You can optionally include a conditional of if (dir.exists("/mylibrary"))... if you like.
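For example, a guarded version (a small sketch; the only assumption is that /mylibrary is the mount point chosen above) could sit at the top of your app:
# prepend the extra library only when it exists, so the same code
# runs unchanged on the host and inside the container
if (dir.exists("/mylibrary")) {
  .libPaths(c("/mylibrary", .libPaths()))
}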
Install the additional CRAN packages.
I see many apps that are written to install packages as needed, such as
if (!require("ggrepel")) install.packages("ggrepel")
This isn't wrong (and is the correct use of require versus library), but it isn't my preferred way of doing things.
An alternative is to install these needed packages manually, which we can do by entering R within the container and installing to the appropriate library path.
$ docker exec -it exciting_gagarin R
R version 4.0.3 (2020-10-10) -- "Bunny-Wunnies Freak Out"
Copyright (C) 2020 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
> .libPaths( c("/mylibrary", .libPaths()) )
> .libPaths()
[1] "/mylibrary" "/usr/local/lib/R/site-library"
[3] "/usr/local/lib/R/library"
> install.packages("ggrepel")
Installing package into ‘/mylibrary’
(as ‘lib’ is unspecified)
trying URL 'https://packagemanager.rstudio.com/all/__linux__/focal/latest/src/contrib/ggrepel_0.9.0.tar.gz'
Content type 'binary/octet-stream' length 980180 bytes (957 KB)
==================================================
downloaded 957 KB
* installing *binary* package ‘ggrepel’ ...
* DONE (ggrepel)
The downloaded source packages are in
‘/tmp/RtmpgPG8YS/downloaded_packages’
> q("no")
Non-CRAN packages. I realize that you're working on a local package. Both of the referenced images contain the devtools package, so with an extra volume-mount, you can use devtools::install, devtools::load_all, or if you've built the source package already, you can install.packages("/mylibrary/mypackage.tar.gz").
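For instance (a sketch only; the paths below reuse the /mylibrary mount and the mypackage name from above as stand-ins for your own), from R inside the container you might run:
# make the mounted directory the install target (first element of .libPaths())
.libPaths(c("/mylibrary", .libPaths()))
# install from a package source directory mounted into the container ...
devtools::install("/mylibrary/mypackage")
# ... or from a source tarball built on the host
install.packages("/mylibrary/mypackage.tar.gz", repos = NULL, type = "source")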
More detail on why it might be necessary to have a different library of packages: compiled code. For instance, when I install ggrepel (one example) on Windows, among the files are
ggrepel/libs/i386/ggrepel.dll
ggrepel/libs/x64/ggrepel.dll
which are compiled libraries specific to the Windows OS. When I install the same package on Linux, I do not have those libraries; instead I see
ggrepel/libs/ggrepel.so
Why do I say Linux? Because these rocker/shiny* images are running Linux under the hood ... even if your host OS is Windows or macOS.
The file formats are incompatible, so renaming doesn't work. Many packages might work out-of-the-box (have not tested), but if you try to mount your host-OS R library path into the container and see errors, think of this discussion.
Notes on docker
(In case you are not familiar.)
In general, everything in a running docker container is ephemeral, meaning logs and saved files are gone when the container is stopped. The way around this is to use volume-mounts as we have done above. In this case, if the app saves a file to its own directory, then you will see it in the host OS under /path/to/mypackage/inst/....
Depending on your OS configuration, this is visible only on the local host, so either http://localhost:3838 or http://127.0.0.1:3838 (which are not always identical, e.g. on Windows sometimes). If you need it exposed to other computers on your network ... then I recommend you do a little research on this. Running servers in docker and exposing them to the rest of the network has risks and consequences.
Related
I just installed OpenBSD 6.9 to study how it works.
I wanted to get the most minimal config possible, because I want to use it as a server.
During installation I chose the option to not install the X server, but I still have the /usr/X11R6 and /etc/X11 directories with X config and commands like startx. The only difference is that now startx doesn't work. I tried installing on VirtualBox and on bare metal, and both were the same.
What do I have to do in order to completely remove X from OpenBSD? And why is it still being installed in my machine even if I explicitly write "no" when prompted during installation?
My system:
OpenBSD 6.9
Intel Pentium G5400
Nvidia 1050 ti.
OpenBSD installation uses different file sets => see OpenBSD FAQ / File Sets
X11 installation is split into 4 file sets:
xbase71.tgz : Base libraries and utilities for X11 (requires xshare71.tgz)
xfont71.tgz : Fonts used by X11
xserv71.tgz : X11's X servers
xshare71.tgz : X11's man pages, locale settings and includes
During installation, you chose not to install xserv71.tgz (the X servers), but you still installed xbase71.tgz (the startx command and other directories).
If you want to completely remove X from OpenBSD, remove every X file set during installation. But you should keep xbase71.tgz, because some programs need it to run correctly even if they are not X programs.
I'm not an OpenBSD developer, so I cannot give a clear answer. But some specific packages which you can add with OpenBSD's package command (pkg_add) need some X libraries or binaries.
For example, when you want to add vim for the first time, you have eight flavors:
$ pkg_info -d vim-8.2.3456-no_x11
Information for inst:vim-8.2.3456-no_x11
[REMOVED]
Flavors:
gtk2 - build using the Gtk+2 toolkit
gtk3 - build using the Gtk+3 toolkit (default)
no_x11 - build without X11 support
lua - build with Lua support
perl - build with Perl support
python - build with Python support
python3 - build with Python3 support
ruby - build with Ruby support
It depends on which packages you need, and also on whether you want to install something from the ports collection.
You can try the quick and dirty way and simply remove the directories you mentioned, but it is possible that some programs from the base system will no longer work because of missing dependencies.
I've followed the instructions to install the stable branch of Virtuoso Open Source 7 on Ubuntu 16.04. There don't appear to be any errors throughout the process of —
./autogen.sh
CFLAGS="-O2 -m64"
export CFLAGS
./configure
make
make install
However, when I go to /usr/local/virtuoso-opensource/var/lib/virtuoso/db (which contains only virtuoso.ini) and run —
virtuoso-t -f &
The first time I do this the terminal just vanishes. When I reopen the terminal and run the same again it just reads The program 'virtuoso-t' is currently not installed. You can install it by typing: apt install virtuoso-opensource-6.1-bin.
I've tried installing both 7 stable and develop from GitHub, and both produce the same result. I'd rather use 7, but I tried installing 6 via the Ubuntu package and Conductor wouldn't work for me. Not having much luck all round; one of those days.
Thanks for any assistance you can provide.
Sounds like you didn't adjust your $PATH variable after make install.
$PATH should include the path to the directory which contains the virtuoso-t binary, or you can include that path in the launch command, e.g. —
/path/to/virtuoso-t -f -c /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso.ini &
(Note that the develop/7 branch is recommended over stable/7 at the moment, due to the number of fixes there.)
On my system (Ubuntu 12.04 and R 3.0.3), whenever I install a custom package in R via
>install.packages()
the package is installed by default to
/home/USER/R/x86_64-pc-linux-gnu-library/3.0/
as opposed to system-wide in
/usr/local/lib/R/site-library/
which is needed for shiny-server to work with that package.
My temporary solution is to copy the packages to the correct folder after the fact.
Question: How can I set the default install path from the start to avoid this problem?
Yes -- I consider this to be a misfeature and disable my per-user directory.
Moreover, I mostly use a script install.r (a version of which ships as an example in the littler package, which you can install as part of Ubuntu) that simply and explicitly sets /usr/local/lib/R/site-library as the default directory. With a patch we got into R 3.0.2 or 3.0.3, a normal user can write into that directory, and R will now create group-writeable directories so other users can update and overwrite -- just make everybody a member of the same group, say staff or admin. And then you don't even need sudo or root.
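If you prefer to stay inside R rather than use install.r, a rough sketch of the same idea (assuming your user can write to the site-library as described above) is:
# install straight into the system-wide library that shiny-server sees
install.packages("shiny", lib = "/usr/local/lib/R/site-library")
# or make it the default install target for the session
.libPaths(c("/usr/local/lib/R/site-library", .libPaths()))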
I have essentially answered this same question a few times here over the years (minus the shiny angle, which is not really relevant), so feel free to search for the others for more details, examples, ...
I would propose a different approach.
The problem is that shiny-server cannot find the packages that you install, because it runs apps as a different user, called shiny. This user is created upon installation of shiny-server.
The easiest (and safest, IMHO) way to solve this is to just install the packages as the shiny user, using the following steps.
1. Set a password for the user using sudo passwd shiny, then enter and confirm a password of your choosing.
2. Switch to the shiny account using: su - shiny
3. Call up R using $ R (without sudo)
4. Install the required packages, in this case: install.packages("shinydashboard")
Note that if you have rstudio-server installed on the same machine, then you can perform steps 2-4 using that interface. Simply go to the same domain/IP and use :8787 for the rstudio-server interface instead of :3838 for shiny-server.
Adapted from my answer here
I'm new to R and I decided to put R on a machine I have and see if I can remotely run code that is on my desktop computer.
While searching for how to do that, I came across the names "Rserve" and "RStudio". As far as I could tell, Rserve is a package (actually, it seems to be the package) which I can use to configure the server, while RStudio is an IDE.
My question is: does RStudio use Rserve "under the hood"? And, if it doesn't, then how does RStudio compare to Rserve? (I.e., which one is better and why?)
[I figured out that this question could possibly be a duplicate, but I couldn't find any similar question]
Rserve is a client/server implementation written in pure C that starts a server and spawns multiple processes, each with its own R workspace. These are processes rather than threads, due to R's limitations around multithreading. It uses the QAP packing protocol as its primary form of transport between the client and the server. You execute commands via the client (PHP, Java, C++) to the server and it returns you REXP objects that are essentially mappings to R's underlying SEXP data objects. Rserve also offers a websockets version that can transmit data through websockets, but the API is not well documented. It also supports basic authentication through a configuration file.
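As a rough illustration of that client/server flow (a minimal sketch using the Rserve and RSclient packages, driven here from another R session, with everything left at its defaults):
library(Rserve)
Rserve(args = "--no-save")   # start a local Rserve daemon

library(RSclient)
conn <- RS.connect()         # defaults to localhost:6311
RS.eval(conn, rnorm(3))      # evaluated in the server-side workspace, returned to the client
RS.close(conn)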
RStudio is a C++ and GWT application that provides a web-based front end to R. AFAIK it uses JSON as its primary transport and supports authentication through PAM. Each user has a workspace configured in their home directory. It runs a server, very similar to but not the same as Rserve, to communicate with R using Rcpp. It also has its own plotting driver used to wrap the plot device so that it can pick up the plots to be served to the UI. It has much more functionality, such as stepping through your code from the UI and viewing workspace variables.
Functionally they are similar in that they provide a client/server connection to R, but IMHO the comparison stops there.
I believe they are separate projects (though I could be wrong). I've never heard of RServe and there does not appear to be any mention of it in the documentation for RStudio. I have used and would recommend RStudio Server. It is relatively easy to set up and super easy to use once it is set up. This is a helpful guide to setting up a server on Amazon EC2:
#Create a user, home directory and set password
sudo useradd rstudio
sudo mkdir /home/rstudio
sudo passwd rstudio
#Enter Password
sudo chmod -R 0777 /home/rstudio
#Update all files from the default state
sudo apt-get update
sudo apt-get upgrade
#Be Able to get R 3.0
sudo add-apt-repository 'deb http://cran.rstudio.com/bin/linux/ubuntu precise/'
#Update files to use CRAN mirror
#Don't worry about error message
sudo apt-get update
#Install latest version of R
#Install without verification
sudo apt-get install r-base
#Install a few background files
sudo apt-get install gdebi-core
sudo apt-get install libapparmor1
#Change to a writeable directory
#Download & Install RStudio Server
cd /tmp
wget http://download2.rstudio.org/rstudio-server-0.97.551-amd64.deb
sudo gdebi rstudio-server-0.97.551-amd64.deb
#Once you’ve run the above commands, you can access RStudio through your local browser. Navigate to the Public DNS of your image on port 8787, similar to:
#http://ec2-50-19-18-120.compute-1.amazonaws.com:8787
The earlier answer, about 3 years old, provides old information, such as here.
Updated correction
RStudio is a firm that provides the open-source RStudio IDE for R. They also sell commercial services such as RStudio Server Pro, which markets itself with load balancing and related features. Apparently, the successful open-source project has led the way to the markets.
You may also mean Microsoft R Server, which is now called Microsoft Machine Learning Server?
There is also RStudio Server by RStudio.
Anyway, how to install both can be found here.
I am writing a protocol for a reproducible analysis using an in-house package "MyPKG". Each user will supply their own input files; other than the inputs, the analyses should be run under the same conditions. (e.g. so that we can infer that different results are due to different input files).
MyPKG is under development, so library(MyPKG) will load whichever was the last version that the user compiled in their local library. It will also load any dependencies found in their local libraries.
But I want everyone to use a specific version (MyPKG_3.14) for this analysis while still allowing development of more recent versions. If I understand correctly, "R --vanilla" will load the same dependencies for everyone.
Once we are done, we will save the working environment as a VM to maintain a stable reproducible environment. So a temporary (6 month) solution will suffice.
I have come up with two potential solutions, but am not sure if either is sufficient.
ask the server admin to install MyPKG_3.14 into the default R path and then provide the following code in the protocol:
R --vanilla
library(MyPKG)
....
or
compile MyPKG_3.14 in a specific library, e.g. lib.loc = "/home/share/lib/R/MyPKG_3.14", and then provide
R --vanilla
library(MyPKG, lib.loc = "/home/share/lib/R/MyPKG_3.14")
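(To make the intent concrete, a sketch of what I imagine the protocol snippet would look like; the version check is just an idea, not something I have settled on:)
# load the pinned build from the shared library and verify the version
library(MyPKG, lib.loc = "/home/share/lib/R/MyPKG_3.14")
stopifnot(packageVersion("MyPKG") == "3.14")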
Are both of these approaches sufficient to ensure that everyone is running the same version?
Is one preferable to the other?
Are there other unforeseen issues that may arise?
Is there a preferred option for standardising the multiple analyses?
Should I include a test of the output of sessionInfo()?
Would it be better to create a single account on the server for everyone to use?
Couple of points:
Use system-wide installation of packages: e.g., the Debian / Ubuntu binary for R (incl. the CRAN ports) will try to use /usr/local/lib/R/site-library (which users can also install into, if added to the group owning the directory). That way everybody gets the same version.
Use system-wide configuration: e.g., prefer $R_HOME/etc/ over the dotfiles below ~/. For the same reason, the Debian / Ubuntu package offers softlinks in /etc/R/.
Use R's facilities to query its packages (e.g. installed.packages()) to report packages and versions; see the sketch after this list.
Use, where available, OS-level facilities to query OS release and version. This, however, is less standardized.
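Regarding the third point, a small sketch of what such a report could look like in the protocol (the package name is taken from the question):
# record the exact environment the analysis ran in
sessionInfo()
# and report which version of MyPKG was found, and from which library
ip <- as.data.frame(installed.packages()[, c("Package", "Version", "LibPath")])
subset(ip, Package == "MyPKG")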
Regarding the last point my box at home says
edd@max:~$ lsb_release -a | tail -4
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.1 LTS
Release:        12.04
Codename:       precise
edd@max:~$
which is a start.