I recently installed the Jupyter data science notebook Docker image, but I can't seem to install any themes on it.
In my local installation I have a dark theme installed and I am used to it.
Following this guide and its install section, I tried creating a /custom/ folder and adding a cascading style sheet to the mounted volume, but it doesn't seem to work.
Is there any way I can install a custom theme on the Docker image?
My workaround is to add the custom.css file locally under ~/.jupyter/custom/. When the Docker container runs, it automatically picks up the theme.
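For example, a minimal sketch of that workaround, assuming the stock jupyter/datascience-notebook image (whose default user is jovyan, so the Jupyter config lives under /home/jovyan/.jupyter):
# put the theme CSS in the local Jupyter config directory
mkdir -p ~/.jupyter/custom
cp custom.css ~/.jupyter/custom/custom.css
# mount that directory into the container so the notebook picks it up at startup
docker run -p 8888:8888 \
  -v ~/.jupyter/custom:/home/jovyan/.jupyter/custom \
  jupyter/datascience-notebook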
You can actually install jupyterthemes and select a theme in the Dockerfile itself. Here is an example for reference.
Install jupyter-themes using pip
RUN pip3 install jupyterthemes
Opt for a theme
CMD ["bash", "-c", "jt -t solarizedd -T -N && jupyter notebook --port=8888 --no-browser --ip=0.0.0.0 --allow-root --notebook-dir=/home/user/workdir"]
Here is a complete Dockerfile sample for reference:
FROM ubuntu:20.04
# Install Python and other dependencies
RUN apt-get update && apt-get install -y \
python3 \
python3-pip \
wget
# Install Jupyter
RUN pip3 install jupyter
RUN pip3 install jupyterthemes
# Create a user with a home directory
RUN useradd --create-home --home-dir /home/user user
USER user
# Mount a volume for the working directory
VOLUME /home/user/workdir
# Set the default command to launch Jupyter
CMD ["bash", "-c", "jt -t solarizedd -T -N && jupyter notebook --port=8888 --no-browser --ip=0.0.0.0 --allow-root --notebook-dir=/home/user/workdir"]
# CMD ["jupyter", "notebook", "--no-browser", "--ip=0.0.0.0", "--allow-root", "--notebook-dir=/home/user/workdir"]
You can also run the same commands from a docker-compose file if you are using the stock image. Note that overriding command replaces the image's default startup, so chain the notebook launch (start-notebook.sh in the Jupyter Docker Stacks images) after applying the theme:
version: "3"
services:
notebook:
image: jupyter/datascience-notebook
ports:
- "8888:8888"
environment:
JUPYTER_ENABLE_LAB: "yes"
volumes:
- .:/home/user/workdir
command: bash -c "pip install jupyterthemes && jt -t solarizedd -T -N"
Here I have chosen the solarizedd theme; just tweak it and pick the theme that suits you best.
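To see what else is available, or to go back to the stock look, jupyterthemes also provides (assuming it is installed as above):
jt -l   # list the available themes
jt -r   # restore the default Jupyter theme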
Hope this helps
Related
I want to create an environment in SageMaker on AWS with Miniconda and make it available as a kernel in Jupyter when I restart the session, but SageMaker keeps failing.
I followed the instructions found here:
https://aws.amazon.com/premiumsupport/knowledge-center/sagemaker-lifecycle-script-timeout/
Basically it says:
"Create a custom, persistent Conda installation on the notebook instance's Amazon Elastic Block Store (Amazon EBS) volume: Run the on-create script in the terminal of an existing notebook instance. This script uses Miniconda to create a separate Conda installation on the EBS volume (/home/ec2-user/SageMaker/). Then, run the on-start script as a lifecycle configuration to make the custom environment available as a kernel in Jupyter. This method is recommended for more technical users, and it is a better long-term solution."
I ran this on-create.sh script in the terminal in Jupyter:
on-create.sh:
#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
unset SUDO_UID
# Install a separate conda installation via Miniconda
WORKING_DIR=/home/ec2-user/SageMaker/custom-environments
mkdir -p "$WORKING_DIR"
wget https://repo.anaconda.com/miniconda/Miniconda3-4.6.14-Linux-x86_64.sh -O "$WORKING_DIR/miniconda.sh"
bash "$WORKING_DIR/miniconda.sh" -b -u -p "$WORKING_DIR/miniconda"
rm -rf "$WORKING_DIR/miniconda.sh"
# Create a custom conda environment
source "$WORKING_DIR/miniconda/bin/activate"
KERNEL_NAME="conda-test-env"
PYTHON="3.6"
conda create --yes --name "$KERNEL_NAME" python="$PYTHON"
conda activate "$KERNEL_NAME"
pip install --quiet ipykernel
# Customize these lines as necessary to install the required packages
conda install --yes numpy
pip install --quiet boto3
EOF
and it creates the "conda-test-env" environment as expected.
Then I add the on-start.sh as a lifecycle configuration:
#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
unset SUDO_UID
source "/home/ec2-user/SageMaker/custom-environments/miniconda/bin/activate"
conda activate conda-test-env
python -m ipykernel install --user --name "conda-test-env" --display-name "conda-test-env"
# Optionally, uncomment these lines to disable SageMaker-provided Conda functionality.
# echo "c.EnvironmentKernelSpecManager.use_conda_directly = False" >> /home/ec2-user/.jupyter/jupyter_notebook_config.py
# rm /home/ec2-user/.condarc
EOF
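(For reference, a rough sketch of how a script like this is typically attached as a lifecycle configuration with the AWS CLI; the configuration and instance names below are placeholders:)
# register the on-start script as a lifecycle configuration
aws sagemaker create-notebook-instance-lifecycle-config \
  --notebook-instance-lifecycle-config-name conda-test-env-config \
  --on-start Content=$(base64 -w0 on-start.sh)
# attach it to a (stopped) notebook instance
aws sagemaker update-notebook-instance \
  --notebook-instance-name my-notebook \
  --lifecycle-config-name conda-test-env-config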
Then I update the instance with the new configuration, and when I start my notebook instance, it fails after a few minutes.
I'd appreciate any help.
I'm trying to learn how to deploy a shiny app using Shinyproxy, and I'm using the templated "euler app" (from this repo), but the application does not appear when I navigate to http://localhost:4445. Here's the most similar question I could find, but unfortunately not helpful to my issue: link.
Background
All installations seem fine, and I successfully installed Docker and Java.
The Dockerfile and Docker image work locally, no issues there. The command docker run --rm -p 3838:3838 shiny-euler-app works.
Here is my Dockerfile (copied from the repo):
FROM openanalytics/r-base
MAINTAINER Tobias Verbeke "tobias.verbeke@openanalytics.eu"
# system libraries of general use
RUN apt-get update && apt-get install -y \
sudo \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev \
libxt-dev \
libssl-dev \
libssh2-1-dev \
libssl1.1
# system library dependency for the euler app
RUN apt-get update && apt-get install -y \
libmpfr-dev
# basic shiny functionality
RUN R -e "install.packages(c('shiny', 'rmarkdown'), repos='https://cloud.r-project.org/')"
# install dependencies of the euler app
RUN R -e "install.packages('Rmpfr', repos='https://cloud.r-project.org/')"
# copy the app to the image
RUN mkdir /root/euler
COPY euler /root/euler
COPY Rprofile.site /usr/lib/R/etc/
EXPOSE 3838
CMD ["R", "-e", "shiny::runApp('/root/euler')"]
ShinyProxy also works fine with the default openanalytics/shinyproxy-demo Docker image.
Problem
The issue I have is when I try and supply a different Shiny app and its accompanying application.yml. Here is the application.yml file I'm using (I've tried to make it as basic as possible, with no authentication, etc):
proxy:
  title: Standalone Docker Engine
  port: 4445
  authentication: none
  docker:
    url: http://localhost:2375
  specs:
    - id: euler
      display-name: Euler's number
      container-cmd: ["R", "-e", "shiny::runApp('/root/euler')"]
      container-image: shiny-euler-app
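(The docker.url entry above points ShinyProxy at the Docker daemon's HTTP API; as a quick sanity check, something like the following should return version JSON if the daemon is actually listening on that port:)
curl http://localhost:2375/version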
Unfortunately, when I run java -jar shinyproxy-2.4.2.jar (in the directory that contains the shinyproxy-2.4.2.jar file and the application.yml file), I get a blank webpage.
For some reason, I am able to access the Shinyproxy webpage, but the Dockerized Shiny app does not appear.
Would really appreciate any helpful suggestions on where/how I could try and solve this issue. Thanks!
I have a Docker container running a shiny app (Dockerfile here).
Shiny Server logs are output to stdout, while application logs are written to /var/log/shiny-server. I'm deploying this container to AWS Fargate, and its logging only captures stdout, which makes debugging a deployed application challenging. I'd like to write the application logs to stdout as well.
I've tried a number of potential solutions:
I've tried the solution provided here, but have had no luck: I added exec xtail /var/log/shiny-server/ to my shiny-server.sh as the last line in the file, but app logs are still not written to stdout.
I noticed that writing application logs to stdout is now the default behavior in rocker/shiny, but as I'm using rocker/verse:3.6.2 (upgraded from 3.6.0 today) along with RUN export ADD=shiny, I don't think this is standard behavior for the rocker/verse:3.6.2 container with Shiny add-on. As a result, I don't get the default behavior out of the box.
This issue on GitHub suggests an alternative method of forcing application logging to stdout by way of an environment variable, SHINY_LOG_STDERR=1, set at runtime, but I'm not Linux-savvy enough to know where that environment variable needs to be set to be effective. I found this documentation for Shiny Server v1.5.13, which suggests which file to set the environment variable in depending on the Linux distro; however, the output from my container when I run cat /etc/os-release is:
which doesn't really line up with any of the distributions in the Shiny Server documentation, thus making the documentation unhelpful.
I tried adding the environment variable from the GitHub issue above to the docker run command, i.e.,
docker run --rm -e SHINY_LOG_STDERR=1 -p 3838:3838 [my image]
as well as
docker run --rm -e APPLICATION_LOGS_TO_STDOUT=true -p 3838:3838 [my image]
and am still not getting the logs to stdout.
I must be missing something here. Can someone help me identify how to successfully get the application logs to stdout?
You can add the line ENV SHINY_LOG_STDERR=1 to your Dockerfile (at least, this works with rocker/shiny; I'm not sure about rocker/verse). For example, with your Dockerfile:
FROM rocker/verse:3.6.2
## Add shiny capabilities to container
RUN export ADD=shiny && bash /etc/cont-init.d/add
## Install curl and xtail
RUN apt-get update && apt-get install -y \
curl \
xtail
## Add pip3 and other Python packages
RUN sudo apt-get update -y && apt-get install -y python3-pip
RUN pip3 install boto3
## Add R packages
RUN R -e "install.packages(c('shiny', 'tidyverse', 'tidyselect', 'knitr', 'rmarkdown', 'jsonlite', 'odbc', 'dbplyr', 'RMySQL', 'DBI', 'pander', 'sciplot', 'lubridate', 'zoo', 'stringr', 'stringi', 'openxlsx', 'promises', 'future', 'scales', 'ggplot2', 'zip', 'Cairo', 'tinytex', 'reticulate'), repos = 'https://cran.rstudio.com/')"
## Update and install
RUN tlmgr update --self --all
RUN tlmgr install ms
RUN tlmgr install beamer
RUN tlmgr install pgf
#Copy app dir and theme dirs to their respective locations
COPY iarr /srv/shiny-server/iarr
COPY iarr/reports/interim_annual_report/theme/SwCustom /opt/TinyTeX/texmf-dist/tex/latex/beamer/
#Force texlive to find my custom beamer theme
RUN texhash
EXPOSE 3838
## Add shiny-server information
COPY shiny-server.sh /usr/bin/shiny-server.sh
COPY shiny-customized.config /etc/shiny-server/shiny-server.conf
## Add dos2unix to eliminate Win-style line-endings and run
RUN apt-get update -y && apt-get install -y dos2unix
RUN dos2unix /usr/bin/shiny-server.sh && apt-get --purge remove -y dos2unix && rm -rf /var/lib/apt/lists/*
# Send application logs to stderr/stdout
ENV SHINY_LOG_STDERR=1
RUN ["chmod", "+x", "/usr/bin/shiny-server.sh"]
CMD ["/usr/bin/shiny-server.sh"]
I'm trying to run my shiny app in a docker container.
My app folder structure is like this:
myApp (directory)
-app (directory)
--ui.R
--server.R
--global.R
--style.css
--mydata.xlsx
--mydata2.rds
--functions.R (contains functions I use in app)
-Dockerfile
-shiny-server.conf
-shiny-server.sh
I can go into my myApp directory and run the app locally with runApp('app'); my Shiny app runs perfectly.
However, when I try to build the image and run it, it gives me an error.
docker build -t myshinyapp_obs .
docker run -p 80:80 myshinyapp_obs
The application failed to start.
The application exited during initialization.
The Docker image builds fine; the error appears when I run the container.
The interesting thing is that when I simply copy any app from the Shiny gallery and put its ui.R and server.R files under the app folder, it works fine!
My question is: why is my app not working, given that:
my app works perfectly locally
after copying a Shiny example from the gallery into the app folder, the example app works fine?
How can that happen? I cannot figure it out. I've spent hours trying to make it work, but failed.
Below is my Dockerfile:
# Install R version 3.6
FROM r-base:3.6.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
# Install R packages that are required
# TODO: add further package if you need!
RUN R -e "install.packages(c('devtools','readxl','tidyverse','rlang','shiny','shinythemes', 'DT'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
I am looking into docker to distribute a shiny application that also requires RStudio. The primary goal is easy installation at hospitals under Windows. Everything that requires character input into black boxes will certainly fail during installation by non-IT people.
My previous attempts used vagrant, but installing vagrant alone proved to be a hurdle.
The Rocker repository has an RStudio image and a Shiny image, and in my own installation both work together. However, I would like to create a combined application for easier installation.
What is the recommended workflow? Start with RStudio and manually add Shiny?
Or merge the Dockerfile code from both Rocker images, starting with r-base? Or use a compose tool?
The point of Docker, in general, is isolation of services so that they can be updated/changed without affecting the others. My recommendation would be to use docker-compose instead. Below is an example docker-compose YAML file that serves both RStudio and Shiny on the same server at different subdomains, using the incredibly useful docker-gen by Jason Wilder. All R Docker images used below are courtesy of Rocker (or, more directly, the Rocker Docker Hub page). These are very reliable because, well, Dirk Eddelbuettel and Carl Boettiger made them. In this example I've also included some options for RStudio, such as setting a user/password and whether or not the user has root access. There are more instructions on using the Rocker RStudio image on this wiki page.
Change the following:
your_user to your username on the server
SOME_USER to your desired RStudio username
SOME_PASS to your desired RStudio password
*.DOMAIN.tld to your domain, don't forget to add A records for your subdomains.
nginx1:
  image: nginx
  container_name: nginx
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /etc/nginx/conf.d
    - /etc/nginx/vhost.d
    - /usr/share/nginx/html
    - /home/your_user/services/volumes/proxy/certs:/etc/nginx/certs:ro
nginx-gen:
  links:
    - "nginx1"
  image: jwilder/docker-gen
  container_name: nginx-gen
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /home/your_user/services/volumes/proxy/templates/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  volumes_from:
    - nginx1
  entrypoint: /usr/local/bin/docker-gen -notify-sighup nginx -watch -only-exposed -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
rstudio:
  links:
    - "nginx1"
  image: rocker/hadleyverse
  container_name: rstudio
  ports:
    - "8787:8787"
  environment:
    - VIRTUAL_PORT=8787
    - ROOT=TRUE
    - VIRTUAL_HOST=rstudio.DOMAIN.tld
    - USER=SOME_USER
    - PASSWORD=SOME_PASS
shiny:
  links:
    - "nginx1"
  image: rocker/shiny
  container_name: shiny
  environment:
    - VIRTUAL_HOST=shiny.DOMAIN.tld
  volumes:
    - /home/your_user/services/volumes/shiny/apps:/srv/shiny-server/
    - /home/your_user/services/volumes/shiny/logs:/var/log/
    - /home/your_user/services/volumes/shiny/packages:/home/shiny/
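With that saved as docker-compose.yml, bringing the whole stack up is just (a usage sketch):
docker-compose up -d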
It's trivial to add more services, like a blog for example: just follow the pattern, or search the internet for a docker-compose version of your service and add it.
Interesting question, but I'm not sure I understand the advantage of having the shiny-server and the rstudio-server instances served from the same container.
Is the purpose so that the two containers share the same R libraries (e.g. so a package doesn't need to be installed separately on each) or merely to have one docker container instead of two? Just having to run two docker commands instead of one doesn't seem that onerous, but maybe I'm underestimating.
Sharing the underlying libraries seems like a valid objective though, and I don't think there's an ideal solution available yet.
I feel the most docker-esque solution would be to do this via container orchestration/compose tool as you mention. This is the usual way to combine separate services (e.g. web server and database) without building one on top of the other.
Unfortunately, the tooling for orchestration based on mapping volumes is not nearly as well developed as it is for mapping ports.
Imagine running rstudio as a volume container:
docker run --name rstudio -v /usr/local/lib/R/site.library rocker/rstudio true
(If you wanted RStudio access at the same time, you could instead run this as:)
docker run --name rstudio -dP -v /usr/local/lib/R/site.library rocker/rstudio
You can then use the site.library from the rstudio container in place of the one on the shiny container with a command like:
docker run --volumes-from rstudio -dP rocker/shiny
Unfortunately, this clobbers the site.library of the shiny container. To work around this, you'd want to mount the library of the rstudio container in a different place, but there's no easy syntax for this like we already have with port links. It can be done though, see:
How to map volume paths using Docker's --volumes-from?
There's an open thread on this issue in the rocker repo too.
I have developed a single working Docker image for:
R
RStudio (server)
Shiny Server (free edition)
I built it for exactly the same reasons mentioned by @Dieter Menne. It may not be ideal for ops, but it is great for dev (especially if the team members all use different environments, like Mac, Windows, etc.).
It is on CentOS 6, as this is the environment I use at work.
This is the Dockerfile:
FROM centos:centos6.7
MAINTAINER enzo smartinsightsfromdata
RUN yum -y install epel-release
RUN yum update -y && yum clean all
# RUN yum reinstall -y glibc-common
RUN yum install -y locales java-1.7.0-openjdk-devel tar
# Misc packages
RUN yum groupinstall -y "Development Tools"
# R devtools pre-requisites:
RUN yum install -y wget git xml2 libxml2-devel curl curl-devel openssl-devel
WORKDIR /home/root
RUN yum install -y R
RUN wget http://cran.r-project.org/src/contrib/rJava_0.9-7.tar.gz
RUN R CMD INSTALL rJava_0.9-7.tar.gz
RUN R CMD javareconf \
&& rm -rf rJava_0.9-7.tar.gz
#-----------------------
# Add RStudio binaries to PATH
# export PATH="/usr/lib/rstudio-server/bin/:$PATH"
ENV PATH /usr/lib/rstudio-server/bin/:$PATH
ENV LANG en_US.UTF-8
RUN yum install -y openssl098e supervisor passwd pandoc
# RUN wget http://download2.rstudio.org/rstudio-server-rhel-0.99.484-x86_64.rpm
# Go for the bleeding edge:
RUN wget https://s3.amazonaws.com/rstudio-dailybuilds/rstudio-server-rhel-0.99.697-x86_64.rpm
RUN yum -y install --nogpgcheck rstudio-server-rhel-0.99.697-x86_64.rpm \
&& rm -rf rstudio-server-rhel-0.99.697-x86_64.rpm
RUN groupadd rstudio \
&& useradd -g rstudio rstudio \
&& echo rstudio | passwd rstudio --stdin
RUN R -e "install.packages(c('shiny', 'rmarkdown'), repos='http://cran.r-project.org', INSTALL_opts='--no-html')"
RUN wget https://download3.rstudio.org/centos5.9/x86_64/shiny-server-1.4.0.756-rh5-x86_64.rpm
RUN yum -y install --nogpgcheck shiny-server-1.4.0.756-rh5-x86_64.rpm \
&& rm -rf shiny-server-1.4.0.756-rh5-x86_64.rpm
RUN mkdir -p /var/log/shiny-server \
&& chown shiny:shiny /var/log/shiny-server \
&& chown shiny:shiny -R /srv/shiny-server \
&& chmod 777 -R /srv/shiny-server \
&& chown shiny:shiny -R /opt/shiny-server/samples/sample-apps \
&& chmod 777 -R /opt/shiny-server/samples/sample-apps
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN mkdir -p /var/log/supervisor \
&& chmod 777 -R /var/log/supervisor
EXPOSE 8787 3838
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
This is what the supervisord.conf file looks like:
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log
pidfile = /tmp/supervisord.pid
[program:rserver]
user=root
command=/usr/lib/rstudio-server/bin/rserver
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
startsecs=0
autorestart=false
[program:shinyserver]
user=root
command=/usr/bin/shiny-server
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
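A sketch of building and running the combined image, publishing both the RStudio (8787) and Shiny Server (3838) ports (the tag name is a placeholder):
docker build -t rstudio-shiny .
docker run -d -p 8787:8787 -p 3838:3838 rstudio-shiny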
It is available at my GitHub page: smartinsightsfromdata.
I have also developed a working Docker image for Shiny Server Pro on CentOS (using the Shiny Server Pro temporary edition, valid for 45 days only).
Somewhat unfortunately, there is no definitive answer; it all depends on how much reusability you are looking for and whether an upstream base image is well maintained. There is also an image-size tradeoff: the more layers there are, the bigger the resulting image gets.