Using Docker image without Entrypoint to serve R plumber API

I use the geospatial rocker2 image to deploy RStudio Server for development and a Shiny app for production. By using a single image, I have a consistent package library, credentials and database connections. I would like to use this same image to serve a plumber API.
Using the standard plumber.R example and the standard plumber Docker example I have tried to serve it as follows:
docker run -v `pwd`/app/plumber.R:/plumber.R --name plumber --restart=unless-stopped \
-p 8000:8000 my_rocker2_fork/geospatial Rscript /plumber.R
Success, kind of. The plumber.R file is clearly being sourced, but it is not being "plumbed".
Another issue is that the container continually restarts, which I can see in the output of docker ps (please ignore the node.js container that is also running).
One more oddity is that port 8000 isn't shown. Sometimes it is, sometimes it isn't. I think this is related to the restarting behaviour.
My code isn't plumbed because I don't have the ENTRYPOINT that is standard in the rstudio/plumber Dockerfile, and I don't think I want that ENTRYPOINT, as it may cause issues with the RStudio Server and the Shiny app that are also in this image. Therefore, I think it is probably best to "plumb" by expanding the Rscript command at the end of my docker run statement:
docker run -v `pwd`/app/plumber.R:/plumber.R -p 8000:8000 my_rocker2_fork/geospatial \
'Rscript pr("/plumber.R") %>% pr_run(port = 8000)' &
However, this fails because of all the special characters (like the pipe operator). How can I serve plumber code with an arbitrary Dockerfile without an Entrypoint?

The answer is simple! Call a script that sets the plumbing in motion, e.g.
docker run -v `pwd`/app/plumb_start.R:/plumb_start.R -v `pwd`/app/plumber.R:/plumber.R \
-p 8000:8000 my_rocker2_fork/geospatial Rscript /plumb_start.R
Where plumb_start.R contains:
pr("plumber.R") %>% pr_run(port=8000)
Make sure that you also expose port 8000 in the Dockerfile.
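Alternatively, if you would rather not mount an extra script, the same expression can usually be passed inline with Rscript -e; the single quotes keep the pipe and parentheses away from the shell (a sketch, assuming plumber is installed in the image):
docker run -v `pwd`/app/plumber.R:/plumber.R -p 8000:8000 my_rocker2_fork/geospatial \
Rscript -e 'library(plumber); pr("/plumber.R") %>% pr_run(host = "0.0.0.0", port = 8000)'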

Related

How to run R Shiny App in Docker Container

I built a Docker Image for an R Shiny App and ran the corresponding container with Docker Toolbox on Windows 10 Home. When trying to open the App with my web browser, only the index is shown. I don't know why the app isn't executed.
The log shows me this:
*** warning - no files are being watched ***
[2019-08-12T15:34:42.688] [INFO] shiny-server - Shiny Server v1.5.12.1 (Node.js v10.15.3)
[2019-08-12T15:34:42.704] [INFO] shiny-server - Using config file "/etc/shiny-server/shiny-server.conf"
[2019-08-12T15:34:43.100] [INFO] shiny-server - Starting listener on http://[::]:3838
I already specified the app host-to-container path by executing the following command which refers to a docker hub image:
docker run --rm -p 3838:3838 -v /C/Docker/App/:/srv/shinyserver/ -v /C/Docker/shinylog:/var/log/shiny-server/ didsh123/ps_app:heatmap
My Dockerfile looks like the following:
# get shiny server plus tidyverse packages image
FROM rocker/shiny-verse:latest
# system libraries of general use
RUN apt-get update && apt-get install -y \
sudo \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev \
libxt-dev \
libssl-dev \
libssh2-1-dev
## Install R packages that are required --> these were already successful
RUN R -e "install.packages(c('shinydashboard','shiny', 'plotly', 'dplyr', 'magrittr'))"
#Heatmap related packages
RUN R -e "install.packages('gpclib', type='source')"
RUN R -e "install.packages('rgeos', type='source')"
RUN R -e "install.packages('rgdal', type='source')"
# copy app to image
COPY ./App /srv/shiny-server/App
# add .conf file to image/container to preserve log file
COPY ./shiny-server.conf /etc/shiny-server/shiny-server.conf
## When the image is run as a container, it will listen on port 3838
EXPOSE 3838
### Avoid running as root --> run the container as a non-root user instead
# allow permission
RUN sudo chown -R shiny:shiny /srv/shiny-server
# execute the following as the shiny user --> important to set permissions before this step
USER shiny
##run app
CMD ["/usr/bin/shiny-server.sh"]
So when I address the Docker IP and the mapped port in the browser, the app should run there, but only the index is displayed. I use the following URL:
http://192.168.99.100:3838/App/
I'm glad for every hint or advice. I'm new to Docker, so I'm also happy for detailed explanations.
To use shiny with docker, I suggest you use the golem package. golem provides a framework for building shiny apps. If you have an app developed according to their framework, the function golem::add_dockerfile() can be used to create dockerfiles automatically.
If you are not interested in a framework, you can still have a look at the source for add_dockerfile() to see how they manage the deployment. Their strategy is to use shiny::runApp() with the port argument. Therefore, shiny-server is not necessary in this case.
The Dockerfile in golem looks roughly like this
FROM rocker/tidyverse:3.6.1
RUN R -e 'install.packages("shiny")'
COPY app.R /app.R
EXPOSE 3838
CMD R -e 'shiny::runApp("app.R", port = 3838, host = "0.0.0.0")'
This will make the app available on port 3838. Of course, you will have to install any other R packages and system dependencies.
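Assuming that Dockerfile sits next to your app.R, building and running it could look like this (the image name my-shiny-app is just a placeholder):
docker build -t my-shiny-app .
docker run --rm -p 3838:3838 my-shiny-app
The app is then reachable at http://localhost:3838/.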
Additional tips
To increase reproducibility, I would suggest you use remotes::install_version() instead of install.packages().
If you are going to deploy several applications with similar dependencies (for example shinydashboard), it makes sense to write your own base image that can be used in place of rocker/tidyverse:3.6.1. This way, your builds will be much quicker.
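A rough sketch of that idea (the image name, registry and pinned versions below are placeholders, not taken from golem):
# Dockerfile for a shared base image; build it once and push it to your registry
FROM rocker/tidyverse:3.6.1
# remotes ships with the tidyverse image; pin the shared shiny dependencies here
RUN R -e 'remotes::install_version("shiny", version = "1.4.0")' \
 && R -e 'remotes::install_version("shinydashboard", version = "0.7.1")'
Each app's Dockerfile can then start with FROM my-registry/shiny-base:latest instead of rocker/tidyverse:3.6.1, so only the app-specific layers need to be rebuilt.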
Minimal Reproducible Example
Create an empty directory (can be called anything you want)
Inside it, create two things:
i. A file called Dockerfile
ii. An empty directory called app
Place your shiny app inside the directory called app.
Your shiny app can be a single app.R file, or, for older shiny apps, ui.R and server.R. Either way is fine (see here for more on that; a minimal example app.R is sketched at the end of this answer).
If unsure about any of the above, just copy the files found here.
Place this in Dockerfile
FROM rocker/shiny:latest
COPY ./app/* /srv/shiny-server/
CMD ["/usr/bin/shiny-server"]
In the terminal, cd to the root of the empty directory you created in step 1, and build the image with
docker build -t shinyimage .
Run the container with
docker run -p 3838:3838 shinyimage
Finally, visit this url to see your shiny app: http://localhost:3838/
Here's a copy of all of the above in case anything's unclear.
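If you just need a placeholder app to drop into the app directory, a minimal single-file app.R could look like this (a sketch using only base shiny, not taken from the linked files):
# app.R - minimal single-file shiny app
library(shiny)
ui <- fluidPage(
  sliderInput("n", "Number of points", min = 10, max = 100, value = 50),
  plotOutput("hist")
)
server <- function(input, output) {
  output$hist <- renderPlot(hist(rnorm(input$n)))
}
shinyApp(ui, server)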
Check the logs for any useful information, and exec into the container to verify that the App content is copied to the correct location.
The way the /App content is mounted looks incorrect.
The content of /App is copied into the image to /srv/shiny-server/App during the build phase, and you are trying to override the /srv/shiny-server content using the -v option when running the container.
It looks like the App data copied at build time is being overwritten at runtime.
Try without -v /C/Docker/App/:/srv/shinyserver/ or use -v /C/Docker/App/:/srv/shinyserver/App/
docker run --rm -p 3838:3838 -v /C/Docker/shinylog:/var/log/shiny-server/ didsh123/ps_app:heatmap
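Or, keeping the mount but pointing it at the App subdirectory the Dockerfile actually serves from (note the shiny-server spelling, with a hyphen, matching the COPY destination above):
docker run --rm -p 3838:3838 -v /C/Docker/App/:/srv/shiny-server/App/ -v /C/Docker/shinylog:/var/log/shiny-server/ didsh123/ps_app:heatmap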

Deploy shiny app in rocker/shiny docker

Well, I'm new at Docker and I need to implement a Shiny app in a Docker Container.
I have the image from https://hub.docker.com/r/rocker/shiny/, that includes Shiny Server, but I don't know how to deploy my app in the server.
I want to deploy the app in the server, install the required packages for my app into the Docker, save the changes and export the image/container.
As I said, I'm new at Docker and I don't know how it really works.
Any idea?
I guess you should start by creating a Dockerfile in a specific folder, which would look something like this:
FROM rocker/shiny:latest
RUN echo 'install.packages(c("package1","package2", ...), \
repos="http://cran.us.r-project.org", \
dependencies=TRUE)' > /tmp/packages.R \
&& Rscript /tmp/packages.R
EXPOSE 3838
CMD ["/usr/bin/shiny-server.sh"]
Then go into this folder and build your image, giving it a name by using this command:
docker build -t your-tag .
Finally, once your image is built, you can create a container. If you don't forget to map the volume and the port, you should be able to find it at localhost:3838 with the following command, launched from the folder containing the srv folder:
docker run --rm -p 3838:3838 -v $PWD/srv/shinyapps/:/srv/shiny-server/ -v $PWD/srv/shinylog/:/var/log/shiny-server/ your-tag
As noted in the image documentation at https://hub.docker.com/r/rocker/shiny/, you might want to launch it in detached mode with the -d option and map it to your host's port 80 for a real deployment.
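For instance, a detached deployment mapped to the host's port 80 might look like this (same image tag and volume mounts as above):
docker run -d -p 80:3838 -v $PWD/srv/shinyapps/:/srv/shiny-server/ -v $PWD/srv/shinylog/:/var/log/shiny-server/ your-tag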
The link (https://hub.docker.com/r/rocker/shiny/) covers how to deploy the shiny server.
Simplest way would be:
docker run --rm -p 3838:3838 rocker/shiny
If you want to extend the shiny server, you can write your own Dockerfile and start with the shiny image as the base image (https://docs.docker.com/engine/reference/builder/).
Dockerfile:
FROM rocker/shiny:latest
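A slightly fuller sketch of such an extension (the package names and app directory below are assumptions, not from the linked docs):
FROM rocker/shiny:latest
# extra R packages your app needs
RUN R -e "install.packages(c('dplyr', 'ggplot2'))"
# copy your app into the directory shiny-server serves from
COPY ./app /srv/shiny-server/app
EXPOSE 3838
# the base image already starts shiny-server by default, so no CMD is strictly needed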

Plotting R objects within docker container

I am new to docker (working on OSX) and am trying to run R scripts within it. I came across a hurdle when I tried to create plots in a loop. The code repeatedly calls a script like this:
pdf(plotname)
boxplot(X)
dev.off()
The routine seems to work and does not produce any error message, but the plots are not created.
Any suggestions as to how to overcome this?
Original docker run:
#!/bin/bash
docker run -v `pwd`/data:/data -ti jbms/fuzzycmeans $1
$1 is the data file name
Thank you in advance
Edit: This answer requires further information to get the container working on OSX.
Share your X11 display to the docker container:
docker run -it \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
# the rest of your docker run
This way it is able to open graphics windows.
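Combined with the original invocation, the whole run script might look like this (a sketch that only merges the two commands shown above; as the edit notes, extra setup such as an X server with network clients allowed is needed on OSX):
#!/bin/bash
# original run plus X11 forwarding
docker run -ti \
  -v `pwd`/data:/data \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=unix$DISPLAY \
  jbms/fuzzycmeans $1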

Docker shows inconsistent behaviour when creating container from image

I am developing a web application which depends on a Moodle system, as it uses Moodle's web services. For my automated tests, I wanted to use docker to provide a preconfigured Moodle application on all my machines. Therefore I created a docker image, which I import from a .tar.gz file.
However, creating a new container instance from this image behaves inconsistently. Sometimes the container boots up correctly and everything works fine. However, sometimes the container starts but the Moodle website is not reachable. If I connect my bash to the container using docker exec -it <container> bash I see that Apache is running. The error logs do not show any entries which might be related to this issue.
If I kill the container instance and boot it up again, everything works as expected (sometimes this step has to be repeated multiple times). Do you have any idea what could be the reason for this strange behaviour? Anyone experiencing similar issues?
Docker is running on Ubuntu 14.04. The problem appears on several machines. The script which imports the image and starts the container looks like this:
#!/usr/bin/env bash
docker rm -f moodle
docker load < my-moodle.tar.gz
docker run -d -p 8080:80 -p 8443:443 -p 3306:3306 --name moodle moodle-image
Thanks in advance!
Successful container startup depends on your container entrypoint and external resources (if the entrypoint has external dependencies). What is the entrypoint? Does it depend on external resources?
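For example, to check what the image runs at startup and what the container logged before the site became unreachable (standard docker commands, nothing Moodle-specific):
docker inspect --format '{{.Config.Entrypoint}}' moodle-image
docker inspect --format '{{.Config.Cmd}}' moodle-image
docker logs moodle
If the entrypoint waits on something external (a database, a network share), that would explain why some startups succeed and others do not.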

Run a service automatically in a docker container

I'm setting up a simple image: one that holds Riak (a NoSQL database). The image starts the Riak service with riak start as a CMD. Now, if I run it as a daemon with docker run -d quintenk/riak-dev, it does start the Riak process (I can see that in the logs). However, it closes automatically after a few seconds. If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started (UPDATE: see answers for an explanation for this). In fact, no services are running at all. I can start it manually using the terminal, but I would like Riak to start automatically. I figure this behavior would occur for other services as well, Riak is just an example.
So, running/restarting the container should automatically start Riak. What is the correct approach of setting this up?
For reference, here is the Dockerfile with which the image can be created (UPDATE: altered using the chosen answer):
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y openssh-server curl
RUN curl http://apt.basho.com/gpg/basho.apt.key | apt-key add -
RUN bash -c "echo deb http://apt.basho.com precise main > /etc/apt/sources.list.d/basho.list"
RUN apt-get update
RUN apt-get -y install riak
RUN perl -p -i -e 's/(?<=\{http,\s\[\s\{")127\.0\.0\.1/0.0.0.0/g' /etc/riak/app.config
EXPOSE 8098
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
EDIT: -f changed to -F in CMD in accordance with sesm's remark
MY OWN ANSWER
After working with Docker for some time, I picked up the habit of using supervisord to run my processes. If you would like example code for that, check out https://github.com/Krijger/docker-cookbooks. I use my supervisor image as a base for all my other images. I blogged on using supervisor here.
To keep docker containers running, you need to keep a process active in the foreground.
So you could probably replace that last line in your Dockerfile with
CMD /bin/riak console
Or even
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
Note that you can't have multiple CMD statements; only the last one gets run.
Using tail to keep the container alive is a hack. Also note that with the -f option, the container will terminate when log rotation happens (this can be avoided by using -F instead).
A better solution is to use supervisor. Take a look at this tutorial about running Riak in a Docker container.
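A rough sketch of what that could look like (the configuration below is an assumption, not taken from the tutorial). In the Dockerfile, instead of the tail-based CMD:
RUN apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/riak.conf
CMD ["/usr/bin/supervisord", "-n"]
And a program definition along these lines in supervisord.conf, running Riak in the foreground so supervisor can manage it:
[program:riak]
command=/bin/riak console
autostart=true
autorestart=true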
The explanation for:
If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started
is as follows. Using CMD in the Dockerfile is actually the same functionality as starting the container using docker run {image} {command}. As Gigablah remarked, only the last CMD is used, so the one written in the Dockerfile is overwritten in this case.
By using CMD /bin/riak start && tail -f /var/log/riak/erlang.log.1 in the Dockerfile, you can start the container as a background process using docker run -d {image}, which works like a charm.
"If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started"
It sounds like you only want to be able to monitor the log when you attach to the container. My use case is a little different in that I want commands started automatically, but I want to be able to attach to the container and be in a bash shell. I was able to solve both of our problems as follows:
In the image/container, add the commands you want automatically started to the end of the /etc/bash.bashrc file.
In your case just add the line /bin/riak start && tail -F /var/log/riak/erlang.log.1, or put /bin/riak start and tail -F /var/log/riak/erlang.log.1 on separate lines depending on the functionality desired.
Now commit your changes to your container, and run it again with: docker run -i -t quintenk/riak-dev /bin/bash. You'll find the commands you put in the bashrc are already running as you attach.
Because I want a clean way to have the process exit later, I make the last command a call to the shell's read, which causes that process to block until I later attach to it and hit enter.
arthur@macro:~/docker$ sudo docker run -d -t -i -v /raid:/raid -p 4040:4040 subsonic /bin/bash -c 'service subsonic start && read -p "waiting"'
WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]
f27229a260c9
arthur@macro:~/docker$ sudo docker ps
[sudo] password for arthur:
ID IMAGE COMMAND CREATED STATUS PORTS
35f253bdf45a subsonic:latest /bin/bash -c service 2 days ago Up 2 days 4040->4040
arthur@macro:~/docker$ sudo docker attach 35f253bdf45a
arthur@macro:~/docker$ sudo docker ps
ID IMAGE COMMAND CREATED STATUS PORTS
As you can see, the container exits after you attach to it and unblock the read.
You can of course use a more sophisticated script than read -p if you need to do other clean up, such as stopping services and saving logs etc.
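For example, a wrapper along these lines blocks the way read does, but also stops the service when the container is asked to exit (a sketch based on the subsonic example above):
#!/bin/bash
# start the service, then block until someone attaches and presses enter
service subsonic start
cleanup() {
  service subsonic stop          # stop the service cleanly
  # copy any logs worth keeping to the mounted /raid volume here
}
trap cleanup EXIT
read -p "waiting"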
I use a simple trick whenever I start building a new docker container. To keep it alive, I use a ping in the entrypoint script.
So in the Dockerfile, when using Debian for instance, I make sure I can ping.
This is, by the way, always nice for checking what is accessible from within the container.
...
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& apt-get install -y iputils-ping
...
ENTRYPOINT ["entrypoint.sh"]
And in the entrypoint.sh file
#!/bin/bash
...
ping 10.10.0.1 >/dev/null 2>/dev/null
I use this instead of CMD bash, as I always wind up using a startup file.
