How to install JupyterHub with Docker on a local machine and in a subdomain

I will run JupyterHub in a subdomain. Here are the Dockerfile, jupyterhub_config.py, and .gitlab-ci.yml.
My first question is how to configure jupyterhub_config.py: how can I load jupyterhub_config.py into the container at build time?
How do I start JupyterHub in the .gitlab-ci.yml for tests, and how do I copy the application to the subdomain? I wrote a README.md. I need a little help with the JupyterHub part. If all works fine, I will write a complete HOWTO on installing JupyterHub on a local machine and in a subdomain at a provider.
FROM continuumio/miniconda3
# Updating packages
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends \
git \
nano \
unzip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install conda and Jupyter
RUN conda update -y conda
RUN conda install -c conda-forge jupyter_nbextensions_configurator \
jupyterhub \
jupyterlab \
matplotlib \
pandas \
scipy
# Setup application
EXPOSE 8000
CMD ["jupyterhub", "--ip='*'", "--port=8000", "--no-browser", "--allow-root"]
The .gitlab-ci.yml
image: docker:latest

variables:
  CONTAINER_IMAGE: registry.gitlab.com/joklein
  DOCKER_IMAGE: jupyterhub
  TAG: 0.1.0

services:
  - docker:dind

stages:
  - build
  - test
  - release
  - deploy

before_script:
  - echo "$GITLAB_PASSWORD" | docker login registry.gitlab.com --username $GITLAB_USER --password-stdin

build:
  stage: build
  script:
    - docker build -t $CONTAINER_IMAGE/$DOCKER_IMAGE .
    - docker push $CONTAINER_IMAGE/$DOCKER_IMAGE

test:
  stage: test
  script:
    - docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
    # - docker run -dt -p 8000:8000 --name $DOCKER_IMAGE $CONTAINER_IMAGE/$DOCKER_IMAGE

release:
  stage: release
  script:
    - docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
    - docker tag $CONTAINER_IMAGE/$DOCKER_IMAGE:latest $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
    - docker push $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
  only:
    - master

deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk update && apk add git openssh-client rsync
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
    - mkdir "${HOME}/.ssh"
    - echo "${SSH_HOST_KEY}" > "${HOME}/.ssh/known_hosts"
    - echo "${SSH_PRIVATE_KEY}" > "${HOME}/.ssh/id_rsa"
    - chmod 700 "${HOME}/.ssh/id_rsa"
    - rsync -hrvz --delete --exclude=_ public/ user@example.com:www/jupyter/
  only:
    - master
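To start JupyterHub for tests (the commented-out docker run above), a minimal smoke test could look like the following sketch. The sleep duration and the use of Python's urllib instead of curl are assumptions; Python is chosen because the miniconda3 base image ships it, while curl may not be installed.
test:
  stage: test
  script:
    - docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
    - docker run -dt -p 8000:8000 --name $DOCKER_IMAGE $CONTAINER_IMAGE/$DOCKER_IMAGE
    - sleep 15
    # fail the job if the Hub does not answer on its login page
    - docker exec $DOCKER_IMAGE python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/hub/login')"
    - docker stop $DOCKER_IMAGE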
The jupyterhub_config.py
c = get_config()
# Letsencrypt (https://letsencrypt.org/) to obtain a free, trusted SSL
# certificate.
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
c.JupyterHub.port = 443
#
# Change from JupyterHub to JupyterLab
c.Spawner.default_url = '/lab'
c.Spawner.debug = True
#
# Specify users and admin
c.Authenticator.whitelist = {"systemuser"}
c.Authenticator.admin_users = {"systemuser"}
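If the Hub should instead live under a URL prefix of the main domain (the deploy stage above rsyncs to www/jupyter/, which suggests path-based hosting), JupyterHub's base_url option applies. A hedged addition, not part of the original config; the prefix is a hypothetical value:
# serve the Hub under a URL prefix behind the provider's web server
# ('/jupyter/' is a hypothetical value; adjust to your setup)
c.JupyterHub.base_url = '/jupyter/'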
Docker base image of JupyterHub and JupyterLab
JupyterHub is a multi-user server for Jupyter notebooks. JupyterLab is the
next-generation web-based user interface for the Jupyter Project. This
repository provides a Docker base image for JupyterHub and JupyterLab
that works as a stand-alone application and in a (sub)domain.
Images derived from this image can either run as a stand-alone server, or
function as a volume image for your server. You can also use them in a CI/CD
system such as GitLab CI to build your content prior to bundling it into a
standalone server container.
Building your JupyterHub image
Based on this structure, you can easily build an image for your needs. There are two options for using the image you generated:
as a stand-alone image
as a volume image for your web server
The simplest way to build your own image is to use a Dockerfile. The one above is only an example; if you need more software packages, you can install them with the Dockerfile and conda.
Build the container
docker build -t jupyterhub .
Your JupyterHub with JupyterLab is automatically generated during this build.
Run the container
docker run -p 8000:8000 -d --name jupyterhub jupyterhub jupyterhub
-p maps your local port 8000 to the container port 8000
-d runs the container in the background; JupyterHub only writes logs, so there is no need to output them in your terminal unless you want to troubleshoot a server error
--name jupyterhub names your container jupyterhub
jupyterhub (first occurrence) is the image
jupyterhub (second occurrence) is the command used to start the JupyterHub server
Your JupyterHub with JupyterLab is now available at http://localhost:8000.
Start / Stop JupyterHub
docker start jupyterhub
docker stop jupyterhub
Configure JupyterHub
Let's Encrypt certificates for JupyterHub
To enable HTTPS on your website, you need to get a certificate (a type of file) from a Certificate Authority (CA). Let's Encrypt is a CA. In order to get a certificate for your website's domain from Let's Encrypt, you have to demonstrate control over the domain. With Let's Encrypt, you do this using software that uses the ACME protocol, which typically runs on your web host.
Go to zerossl.com and generate a certificate for your domain. As a result you get four files: domain-key.txt, domain-crt.txt, domain-csr.txt, and account-key.txt. These files are Base64-encoded ASCII, not binary. The certificates are already in PEM format; just change the extension to .pem.
For JupyterHub, only the files domain-key.txt and domain-crt.txt are needed.
cp domain-crt.txt fullchain.pem
cp domain-key.txt privkey.pem
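Since the config above expects the certificates under /etc/letsencrypt/live/example.com/, one way to provide them is to mount that directory read-only and publish port 443. This is a sketch; it assumes the renamed .pem files live at that path on the host and that the config file was baked into the image as shown earlier:
docker run -d --name jupyterhub -p 443:443 \
-v /etc/letsencrypt/live/example.com:/etc/letsencrypt/live/example.com:ro \
jupyterhub jupyterhub -f /srv/jupyterhub/jupyterhub_config.py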
Add a System user in the container
By default JupyterHub searches for users on the server. In order to be able to
log in to our new JupyterHub server we need to connect to the JupyterHub docker
container and create a new system user with a password.
docker exec -it jupyterhub bash
useradd --create-home systemuser
passwd systemuser
exit
The command docker exec -it jupyterhub bash will spawn a root shell in your
docker container. You can use the root shell to create system users in the
container. These accounts will be used for authentication in JupyterHub's
default configuration.
The first command useradd creates a new user named systemuser. The second will
ask you for a password.

The whole process might be simpler with GitLab 12.0 (June 2019) and its Git integration for JupyterHub:
Deploying JupyterHub via GitLab’s Kubernetes integration provides an easy way to get started with Jupyter notebooks, which can be used to create and share documents that contain live code, visualizations, and even runbooks.
Starting with GitLab 12.0, JupyterLab’s Git extension is automatically provisioned and configured when installing JupyterHub onto your Kubernetes cluster.
This integration enables full version control of your notebooks as well as issuance of Git commands within Jupyter. Git commands can be issued via the Git tab on the left panel or via Jupyter’s command line prompt.
See documentation and gitlab-ce issue 47138.

jupyterhub --generate-config
This is what the documentation shows; it creates a jupyterhub_config.py file in /srv/jupyterhub.

Related

How to find error logs when my dockerized shiny app does not work

I'm trying to put my Shiny app in a Docker container. My Shiny app works totally fine on my local computer, but after dockerizing it I always get an error message on my localhost like The application failed to start. The application exited during initialization..
I have no idea why that happens; I'm new to Docker. How can I find the error logs when I run the Docker image? I need the log to know what goes wrong.
Here is my Dockerfile:
# Install R version 3.6
FROM r-base:3.6.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
# Install R packages that are required
# TODO: add further package if you need!
RUN R -e "install.packages(c( 'tidyverse', 'ggplot2','shiny','shinydashboard', 'DT', 'plotly', 'RColorBrewer'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
I built image and ran like below:
docker build -t myshinyapp .
docker run -p 80:80 myshinyapp
Usually the logs for any (live or dead) container can be found by just using:
docker logs full-container-name
or
docker logs CONTAINERID
(replacing the actual ID of your container)
As said above, this usually works even for stopped (but not yet removed) containers, which you can list with:
docker container ls -a
or just
docker ps -a
However, sometimes you won't even have a log, since the container was never created at all (which, from experience, I think fits your case better).
That can happen simply because the Docker engine is unable to allocate the resources that your service definition requires.
The application failed to start. The application exited during initialization
is usually a sign that your Docker engine could not get the required resources.
The most common case is as simple as your host ports: if another service (dockerized or not) is already using the port that you want for your service (in your case, port 80), Docker will simply be unable to start your container.
So, in short, the easiest fix for that situation (and your first try whenever you face this kind of issue) is to bind some other host port (say 8080) to the port 80 that your service listens on internally (inside your container):
docker run -p 8080:80 myshinyapp
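To confirm that kind of port conflict before rebinding, you can check what is already listening on the host port (a quick check, assuming a Linux host where the ss tool is available):
# show listening TCP sockets and owning processes, filtered to port 80
sudo ss -ltnp | grep ':80 '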
The same principle applies to volumes that cannot be allocated (e.g. trying to bind-mount as read-only a path that doesn't actually exist on the host).
As an aside comment/trick:
Since you're not setting a name for your container, you will need to use the container id instead when looking for its logs.
But instead of typing (or copy-pasting) the full container id (usually something like 1283c66babea or even longer), you can just type the first few digits, and it will still work as expected:
docker logs 1283c6 or docker logs 1283 or even docker logs 128
(of course... as long as you don't have any other 128***** container)

Deploy shiny app in rocker/shiny docker

Well, I'm new to Docker and I need to implement a Shiny app in a Docker container.
I have the image from https://hub.docker.com/r/rocker/shiny/, which includes Shiny Server, but I don't know how to deploy my app on the server.
I want to deploy the app on the server, install the required packages for my app inside the Docker container, save the changes, and export the image/container.
As I said, I'm new to Docker and I don't know how it really works.
Any idea?
I guess you should start by creating a Dockerfile in a specific folder, which would look something like this:
FROM rocker/shiny:latest
RUN echo 'install.packages(c("package1","package2", ...), \
repos="http://cran.us.r-project.org", \
dependencies=TRUE)' > /tmp/packages.R \
&& Rscript /tmp/packages.R
EXPOSE 3838
CMD ["/usr/bin/shiny-server.sh"]
Then go into this folder and build your image, giving it a name, with this command:
docker build -t your-tag .
Finally, once your image is built, you can create a container. If you don't forget to map the volume and the port, you should be able to find your app at localhost:3838 with the following command, launched from the folder containing the srv folder:
docker run --rm -p 3838:3838 -v $PWD/srv/shinyapps/:/srv/shiny-server/ -v $PWD/srv/shinylog/:/var/log/shiny-server/ your-tag
As explained in the Docker documentation at https://hub.docker.com/r/rocker/shiny/, you might want to launch it in detached mode with the -d option and map it to your host's port 80 for a real deployment.
The link (https://hub.docker.com/r/rocker/shiny/) covers how to deploy the Shiny server.
Simplest way would be:
docker run --rm -p 3838:3838 rocker/shiny
If you want to extend Shiny Server, you can write your own Dockerfile and start with the shiny image as the base image (https://docs.docker.com/engine/reference/builder/).
Dockerfile:
FROM rocker/shiny:latest
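# (the original answer breaks off here; the following continuation is a hedged
#  sketch of how such a Dockerfile typically ends — './app' and 'myapp' are
#  hypothetical names, the target path is the one used in the answer above)
COPY ./app /srv/shiny-server/myapp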

RStudio and Shiny in one dockerfile

I am looking into Docker to distribute a Shiny application that also requires RStudio. The primary goal is easy installation at hospitals under Windows. Anything that requires character input into black boxes will certainly fail during installation by non-IT people.
My previous attempts used Vagrant, but installing Vagrant alone proved to be a hurdle.
The rocker repository has an RStudio image and a Shiny image, and for my own installation both work together. However, I would like to create a combined application for easier installation.
What is the recommended workflow? Start with RStudio and manually add Shiny?
Or merge the Dockerfile code from both Rocker images, starting with r-base? Or use a compose tool?
The point of Docker, in general, is isolation of services so that they can be updated/changed without affecting others. My recommendation would be to use docker-compose instead. Below is an example docker-compose YAML file that serves both RStudio and Shiny on the same server at different subdomains, using the incredibly useful docker-gen by Jason Wilder. All R Docker images used below are courtesy of Rocker, or more directly the Rocker Docker Hub. They are very reliable because, well, Dirk Eddelbuettel and Carl Boettiger made them. In this example I've also included some options for RStudio, such as setting a user/pass and whether or not the user has root access. There are more instructions on using the Rocker RStudio image on the Rocker wiki page.
Change the following:
your_user to your username on the server
SOME_USER to your desired RStudio username
SOME_PASS to your desired RStudio password
*.DOMAIN.tld to your domain; don't forget to add A records for your subdomains.
nginx1:
  image: nginx
  container_name: nginx
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /etc/nginx/conf.d
    - /etc/nginx/vhost.d
    - /usr/share/nginx/html
    - /home/your_user/services/volumes/proxy/certs:/etc/nginx/certs:ro
nginx-gen:
  links:
    - "nginx1"
  image: jwilder/docker-gen
  container_name: nginx-gen
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /home/your_user/services/volumes/proxy/templates/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  volumes_from:
    - nginx1
  entrypoint: /usr/local/bin/docker-gen -notify-sighup nginx -watch -only-exposed -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
rstudio:
  links:
    - "nginx1"
  image: rocker/hadleyverse
  container_name: rstudio
  ports:
    - "8787:8787"
  environment:
    - VIRTUAL_PORT=8787
    - ROOT=TRUE
    - VIRTUAL_HOST=rstudio.DOMAIN.tld
    - USER=SOME_USER
    - PASSWORD=SOME_PASS
shiny:
  links:
    - "nginx1"
  image: rocker/shiny
  container_name: shiny
  environment:
    - VIRTUAL_HOST=shiny.DOMAIN.tld
  volumes:
    - /home/your_user/services/volumes/shiny/apps:/srv/shiny-server/
    - /home/your_user/services/volumes/shiny/logs:/var/log/
    - /home/your_user/services/volumes/shiny/packages:/home/shiny/
It's trivial to add more services like a blog, for example, just follow the pattern or search the internet for a docker-compose version of your service and add it.
Interesting question, but I'm not sure I understand the advantage of having the shiny-server and the rstudio-server instances served from the same container.
Is the purpose so that the two containers share the same R libraries (e.g. so a package doesn't need to be installed separately on each) or merely to have one docker container instead of two? Just having to run two docker commands instead of one doesn't seem that onerous, but maybe I'm underestimating.
Sharing the underlying libraries seems like a valid objective though, and I don't think there's an ideal solution available yet.
I feel the most docker-esque solution would be to do this via container orchestration/compose tool as you mention. This is the usual way to combine separate services (e.g. web server and database) without building one on top of the other.
Unfortunately, the tooling for orchestration based on mapping volumes is not nearly as well developed as it is for mapping ports.
Imagine running the rstudio as a volume container:
docker run --name rstudio -v /usr/local/lib/R/site.library rocker/rstudio true
(If you wanted RStudio access at the same time, you could instead run this as:)
docker run --name rstudio -dP -v /usr/local/lib/R/site.library rocker/rstudio
You can then use the site.library from the rstudio container in place of the one on the shiny container with a command like:
docker run --volumes-from rstudio -dP rocker/shiny
Unfortunately, this clobbers the site.library of the shiny container. To work around this, you'd want to mount the library of the rstudio container in a different place, but there's no easy syntax for this like we already have with port links. It can be done, though; see:
How to map volume paths using Docker's --volumes-from?
There's an open thread on this issue in the rocker repo too.
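One possible workaround, sketched here rather than taken from the answers above (it assumes a Docker version whose inspect output has the .Mounts field): ask Docker for the host path backing the rstudio container's volume, then bind-mount that path at a secondary location in the shiny container and add it to .libPaths() in R.
# print the host directory backing the rstudio container's site.library volume
docker inspect -f '{{range .Mounts}}{{.Source}}{{end}}' rstudio
# bind-mount it where it does not clobber shiny's own library
# (/usr/local/lib/R/extra.library is a hypothetical mount point)
docker run -dP -v "$(docker inspect -f '{{range .Mounts}}{{.Source}}{{end}}' rstudio)":/usr/local/lib/R/extra.library rocker/shiny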
I have developed a working single Docker image for:
R
RStudio (server)
Shiny Server (free edition)
I built it exactly for the same reasons mentioned by @Dieter Menne. It may not be ideal for ops, but it is great for dev (especially if the team members all use different environments, like Mac, Windows, etc.).
It is on CentOS 6, as this is the environment I use at work.
This is the dockerfile:
FROM centos:centos6.7
MAINTAINER enzo smartinsightsfromdata
RUN yum -y install epel-release
RUN yum update -y && yum clean all
# RUN yum reinstall -y glibc-common
RUN yum install -y locales java-1.7.0-openjdk-devel tar
# Misc packages
RUN yum groupinstall -y "Development Tools"
# R devtools pre-requisites:
RUN yum install -y wget git xml2 libxml2-devel curl curl-devel openssl-devel
WORKDIR /home/root
RUN yum install -y R
RUN wget http://cran.r-project.org/src/contrib/rJava_0.9-7.tar.gz
RUN R CMD INSTALL rJava_0.9-7.tar.gz
RUN R CMD javareconf \
&& rm -rf rJava_0.9-7.tar.gz
#-----------------------
# Add RStudio binaries to PATH
# export PATH="/usr/lib/rstudio-server/bin/:$PATH"
ENV PATH /usr/lib/rstudio-server/bin/:$PATH
ENV LANG en_US.UTF-8
RUN yum install -y openssl098e supervisor passwd pandoc
# RUN wget http://download2.rstudio.org/rstudio-server-rhel-0.99.484-x86_64.rpm
# Go for the bleeding edge:
RUN wget https://s3.amazonaws.com/rstudio-dailybuilds/rstudio-server-rhel-0.99.697-x86_64.rpm
RUN yum -y install --nogpgcheck rstudio-server-rhel-0.99.697-x86_64.rpm \
&& rm -rf rstudio-server-rhel-0.99.697-x86_64.rpm
RUN groupadd rstudio \
&& useradd -g rstudio rstudio \
&& echo rstudio | passwd rstudio --stdin
RUN R -e "install.packages(c('shiny', 'rmarkdown'), repos='http://cran.r-project.org', INSTALL_opts='--no-html')"
RUN wget https://download3.rstudio.org/centos5.9/x86_64/shiny-server-1.4.0.756-rh5-x86_64.rpm
RUN yum -y install --nogpgcheck shiny-server-1.4.0.756-rh5-x86_64.rpm \
&& rm -rf shiny-server-1.4.0.756-rh5-x86_64.rpm
RUN mkdir -p /var/log/shiny-server \
&& chown shiny:shiny /var/log/shiny-server \
&& chown shiny:shiny -R /srv/shiny-server \
&& chmod 777 -R /srv/shiny-server \
&& chown shiny:shiny -R /opt/shiny-server/samples/sample-apps \
&& chmod 777 -R /opt/shiny-server/samples/sample-apps
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN mkdir -p /var/log/supervisor \
&& chmod 777 -R /var/log/supervisor
EXPOSE 8787 3838
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
This is what the supervisord.conf file looks like:
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log
pidfile = /tmp/supervisord.pid
[program:rserver]
user=root
command=/usr/lib/rstudio-server/bin/rserver
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
startsecs=0
autorestart=false
[program:shinyserver]
user=root
command=/usr/bin/shiny-server
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
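To build and run the combined image (a hedged example; the image and container names are hypothetical):
docker build -t r-rstudio-shiny .
# RStudio Server on 8787 and Shiny Server on 3838, matching the EXPOSE line above
docker run -d -p 8787:8787 -p 3838:3838 --name r-stack r-rstudio-shiny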
It is available at my GitHub page: smartinsightsfromdata.
I have also developed a working Docker image for Shiny Server Pro on CentOS (using the Shiny Server Pro temporary edition, valid for 45 days only).
Somewhat unfortunately, there is no definitive answer; it all depends on how much reusability you are looking for and whether an upstream base image is well maintained. There is also an image-size tradeoff: the more layers there are, the bigger the resulting image gets.

run apps using audio in a docker container

This question is inspired by Can you run GUI apps in a docker container?.
The basic idea is to run apps with audio and a UI (VLC, Firefox, Skype, ...).
I was searching for Docker containers using PulseAudio, but all containers I found were using PulseAudio streaming over TCP
(security sandboxing of the applications):
https://gist.github.com/hybris42/ce429de428e5af3a344a
https://github.com/jlund/docker-chrome-pulseaudio
https://github.com/tomparys/docker-skype-pulseaudio
In my case I would prefer playing audio from an app inside the container directly on my host PulseAudio (without SSH tunneling and bloated Docker images).
PulseAudio because my Qt app is using it ;)
It took me some time to find out what is needed (Ubuntu).
we start with the docker run command docker run -ti --rm myContainer sh -c "echo run something"
ALSA:
We need /dev/snd and, by the looks of it, some hardware access.
Putting this together, we have:
docker run -ti --rm \
-v /dev/snd:/dev/snd \
--lxc-conf='lxc.cgroup.devices.allow = c 116:* rwm' \
myContainer sh -c "echo run something"
In newer Docker versions without LXC flags you should use this:
docker run -ti --rm \
-v /dev/snd:/dev/snd \
--privileged \
myContainer sh -c "echo run something"
PULSEAUDIO:
Update: it may be enough to mount the PulseAudio socket within the container using the -v option. This depends on your version and preferred access method; see other answers for the socket method.
Here we basically need /dev/shm, /etc/machine-id and /run/user/$uid/pulse. But that is not all (maybe because of Ubuntu and how they did it in the past). The environment variable XDG_RUNTIME_DIR has to be the same on the host system and in your Docker container. You may also need /var/lib/dbus, because some apps access the machine id from there (it may only contain a symbolic link to the 'real' machine id). And you may need the hidden home folder ~/.pulse for some temp data (I am not sure about this).
docker run -ti --rm \
-v /dev/shm:/dev/shm \
-v /etc/machine-id:/etc/machine-id \
-v /run/user/$uid/pulse:/run/user/$uid/pulse \
-v /var/lib/dbus:/var/lib/dbus \
-v ~/.pulse:/home/$dockerUsername/.pulse \
myContainer sh -c "echo run something"
In new docker versions you might need to add --privileged.
Of course you can combine both together and use it together with xServer ui forwarding like here: https://stackoverflow.com/a/28971413/2835523
Just to mention:
you can handle most of this (everything except the user id itself) in the Dockerfile
using uid=$(id -u) to get the user id, and gid=$(id -g) for the group id
creating a Docker user with this id
create user script:
mkdir -p /home/$dockerUsername && \
echo "$dockerUsername:x:${uid}:${gid}:$dockerUsername,,,:/home/$dockerUsername:/bin/bash" >> /etc/passwd && \
echo "$dockerUsername:x:${uid}:" >> /etc/group && \
mkdir /etc/sudoers.d && \
echo "$dockerUsername ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$dockerUsername && \
chmod 0440 /etc/sudoers.d/$dockerUsername && \
chown ${uid}:${gid} -R /home/$dockerUsername
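A hedged sketch of wiring that script up at build time, assuming the Dockerfile declares uid, gid and dockerUsername as ARG values:
docker build \
--build-arg uid=$(id -u) \
--build-arg gid=$(id -g) \
--build-arg dockerUsername=$USER \
-t myContainer .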
Inspired by the links you've posted, I was able to create the following solution. It is as lightweight as I could get it. However, I'm not sure if it is (1) secure, and (2) entirely fits your use case (as it still uses the network).
Install paprefs on your host system, e.g. using sudo apt-get install paprefs on an Ubuntu machine.
Launch PulseAudio Preferences, go to the "Network Server" tab, and check the "Enable network access to local sound devices" checkbox [1].
Restart your computer. (Only restarting PulseAudio didn't work for me on Ubuntu 14.10.)
Install Pulseaudio in your container, e.g. sudo apt-get install -y pulseaudio
In your container, run export "PULSE_SERVER=tcp:<host IP address>:<host Pulseaudio port>". For example, export "PULSE_SERVER=tcp:172.16.86.13:4713" [2]. You can find out your IP address using ifconfig and the Pulseaudio port using pax11publish [1].
That's it. Step 5 should probably be automated if the IP address and PulseAudio port are subject to change. Additionally, I'm not sure whether Docker permanently stores environment variables like PULSE_SERVER: if it doesn't, you have to initialize it after each container start.
Suggestions to make my approach even better would be greatly appreciated, since I'm currently working on a similar problem as the OP.
References:
[1] https://github.com/jlund/docker-chrome-pulseaudio
[2] https://github.com/jlund/docker-chrome-pulseaudio/blob/master/Dockerfile
UPDATE (and probably the better solution):
This also works using a Unix socket instead of a TCP socket:
Start the container with -v /run/user/$UID/pulse/native:/path/to/pulseaudio/socket
In the container, run export "PULSE_SERVER=unix:/path/to/pulseaudio/socket"
The /path/to/pulseaudio/socket can be anything, for testing purposes I used /home/user/pulse.
Maybe it will even work with the same path as on the host (taking care of the $UID part) as the default socket, this way the ultimate solution would be -v /run/user/$UID/pulse/native:/run/user/<UID in container>/pulse; I haven't tested this however.
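Putting the socket variant together (a sketch: it assumes the user inside the container has uid 1000, and that pulseaudio-utils is installed in the image so paplay is available as a test command):
docker run --rm \
-v /run/user/$UID/pulse/native:/run/user/1000/pulse/native \
-e PULSE_SERVER=unix:/run/user/1000/pulse/native \
myContainer paplay /usr/share/sounds/alsa/Front_Center.wav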
After trying most of the solutions described here, I found only PulseAudio over the network to really work. However, you can make it safe by keeping the authentication.
Install paprefs (on host machine):
$ apt-get install paprefs
Launch paprefs (PulseAudio Preferences) > Network Server > [X] Enable network access to local sound devices.
Restart PulseAudio:
$ service pulseaudio restart
Check it worked or restart machine:
$ (pax11publish || xprop -root PULSE_SERVER) | grep -Eo 'tcp:[^ ]*'
tcp:myhostname:4713
Now use that socket:
$ docker run \
-e PULSE_SERVER=tcp:$(hostname -i):4713 \
-e PULSE_COOKIE=/run/pulse/cookie \
-v ~/.config/pulse/cookie:/run/pulse/cookie \
...
Check that the user running inside the container has access to the cookie file ~/.config/pulse/cookie.
To test it works:
$ apt-get install mplayer
$ mplayer /usr/share/sounds/alsa/Front_Right.wav
For more info may check Docker Mopidy project.
Assuming PulseAudio is installed on the host and in the image, one can provide PulseAudio sound over TCP with only a few steps. PulseAudio does not need to be restarted, and no configuration has to be done on the host or in the image either. This way it is included in x11docker, without the need for VNC or SSH:
First, find a free TCP port:
read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
while : ; do
PULSE_PORT="`shuf -i $LOWERPORT-$UPPERPORT -n 1`"
ss -lpn | grep -q ":$PULSE_PORT " || break
done
Get the IP address of the Docker daemon. I always find it to be 172.17.42.1/16:
ip -4 -o a | grep docker0 | awk '{print $4}'
Load pulseaudio tcp module, authenticate connection to docker ip:
PULSE_MODULE_ID=$(pactl load-module module-native-protocol-tcp port=$PULSE_PORT auth-ip-acl=172.17.42.1/16)
On docker run, create environment variable PULSE_SERVER
docker run -e PULSE_SERVER=tcp:172.17.42.1:$PULSE_PORT yourimage
Afterwards, unload the TCP module. (Note: for unknown reasons, unloading this module can stop the PulseAudio daemon on the host.)
pactl unload-module $PULSE_MODULE_ID
Edit: How-To for ALSA and Pulseaudio in container
I managed to dockerize a Java game in the following way, effectively passing through the game's sound.
This approach requires building an image, making sure the app has all the dependencies it'll need, in this case PulseAudio and X11. If you're sure your image has everything it needs, you may proceed as stated in the previous answers.
Here, we need to build the image, then we can actually launch it.
docker build -t my-unciv-image . # run from the directory containing the Dockerfile
docker run --name unciv \
  --device /dev/dri \
  -e DISPLAY=$DISPLAY \
  -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
  --privileged \
  -u $(id -u):$(id -g) \
  -v /path/to/Unciv:/App \
  -v /run/user/$(id -u)/pulse:/run/user/$(id -u)/pulse \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -w /App \
  my-unciv-image \
  java -jar /App/Unciv.jar
In the second command the following is specified:
--name: a name is given to the container
--device: video device*
-e: required environment vars
DISPLAY: the display number
PULSE_SERVER: PulseAudio audio server socket
--privileged: run it privileged*, so it can access all devices
-v: Mounted volumes:
Path to the game mounted into /App in the container**
Audio server socket
Display server socket
-w: Working directory
Here is a docker-compose.yml version of it:
# docker-compose.yml
version: '3'
services:
  unciv:
    build: .
    container_name: unciv
    devices:
      - /dev/dri:/dev/dri # * Either this
    entrypoint: java -jar /App/Unciv.jar
    environment:
      - DISPLAY=$DISPLAY
      - PULSE_SERVER=unix:/run/user/1000/pulse/native
    privileged: true # * or this
    user: 1000:1000
    volumes:
      - /path/to/game/:/App
      - /run/user/1000/pulse:/run/user/1000/pulse
      - /tmp/.X11-unix:/tmp/.X11-unix
    working_dir: /App
The corresponding Dockerfile:
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install openjdk-11-jre -y
RUN apt-get install -y xserver-xorg-video-all
RUN apt-get install -y libgl1-mesa-glx libgl1-mesa-dri
RUN apt-get install -y pulseaudio
# create the unciv user so the USER directive below can resolve
RUN useradd -m unciv
USER unciv
Notes:
*Only required for a game or anything that uses OpenGL: either pass the devices explicitly or run it privileged. I think it's enough to pass the device; making it privileged may be overkill.
**The app may be bundled with the Docker image, but for a demo it is mounted from the host.
For the audio, it's required to pass the PULSE_SERVER environment variable and to mount the PulseAudio socket.

How to deploy Meteor and Phusion Docker to Digital Ocean with Docker?

What is a workflow for deploying to Digital Ocean with Phusion Docker and Node/Meteor support?
I tried:
FROM phusion/passenger-nodejs:0.9.10
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# ssh
ADD private/keys/akey.pub /tmp/your_key
RUN cat /tmp/your_key >> /root/.ssh/authorized_keys && rm -f /tmp/your_key
## Download shit
RUN apt-get update
RUN apt-get install -qq -y python-software-properties software-properties-common curl git build-essential
RUN npm install fibers@1.0.1
# install meteor
RUN curl https://install.meteor.com | /bin/sh
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Enable nginx
# RUN rm -f /etc/service/nginx/down
#setup app
RUN mkdir /home/app/someapp
ADD . /home/app/someapp
WORKDIR /home/app/someapp
EXPOSE 4000
CMD passenger start -p 4000
But nothing is working, and I'm not sure how to manage updating, deploying, and running it.
E.g., how would you handle updating the app without rebuilding the Docker image?
Here is my suggested workflow:
Create an account on Docker Hub; you can get one private repository for free. If you want a completely private repository hosted on your own server, you can run an entire Docker registry and use it to host your images.
Create your image on your development machine (locally or on a server), then push the image to the repository using docker push.
Update the image when needed and commit your changes with docker commit, then push the updated image to your repository (you should properly version and tag all your images).
You can start a Digital Ocean droplet with Docker pre-installed (from the applications tab) and simply pull your image and run your container. Whenever you update and push your image from your development machine, simply pull it again from the droplet.
For a large and complex infrastructure, I would recommend looking into Ansible to configure your Docker containers and manage Digital Ocean droplets as well.
Be aware that your data will be lost if you remove the container, so consider defining a volume in your container that is mapped to a shared folder on your host machine.
I suggest you test your Dockerfile in a local VirtualBox VM. I wrote a tutorial about deploying a node.js app with Docker. I build several images (layers) instead of just one; when you update your app, you only need to rebuild the top layer. Hope it helps: http://vinceyuan.blogspot.com/2015/05/deploying-web-app-redis-postgres-and.html
