RStudio via Docker cannot read /etc/odbc.ini, only ~/.odbc.ini

When I build and then run a Docker container which runs RStudio on Ubuntu, the ODBC connection does not work when I add the odbc.ini file during the build. However, if I leave the odbc.ini file out of the build and instead add it myself from within the running container, the connection does indeed work.
So my problem is that I am trying to get the ODBC connection up and running out of the box whenever this image is run, without the additional step of having to log in to the Ubuntu container instance and add connection details to the odbc.ini file.
Here's what the odbc.ini file looks like, with dummy data:
[PostgreSQL ANSI]
Driver = PostgreSQL ANSI
Database = GoogleData
Servername = somename.postgres.database.azure.com
UserName = docker_rstudio#somename
Password = abc123abc
Port = 5432
sslmode = require
I have a copy of this file, odbc.ini, in my repo directory and include it in the build. My Dockerfile:
FROM rocker/tidyverse:3.6.3
ENV ADD=SHINY
ENV ROOT=TRUE
ENV PASSWORD='abc123'
RUN apt-get update && apt-get install -y \
    less \
    vim \
    unixodbc unixodbc-dev \
    odbc-postgresql
ADD odbc.ini /etc/odbc.ini
ADD install_packages.R /tmp/install_packages.R
RUN Rscript /tmp/install_packages.R && rm -R /tmp/*
ADD flagship_ecommerce /home/rstudio/blah/zprojects/flagship_ecommerce
ADD commission_junction /home/rstudio/blah/zprojects/commission_junction
RUN mkdir /srv/shiny-server; ln -s /home/rstudio/blah/zprojects/ /srv/shiny-server/
If I then log in to the instance via RStudio, the connection does not work; I get this error message:
Error: nanodbc/nanodbc.cpp:983: 00000: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
If I take a look at the file with less /etc/odbc.ini I do indeed see the connection details per my top code block.
If I then copy it to home with cp /etc/odbc.ini /home/rstudio/.odbc.ini, then, after that, my connection does work.
But even if I amend my Dockerfile with ADD odbc.ini /home/rstudio/.odbc.ini, the connection doesn't work. It only works when I manually copy the file to /home/rstudio/.odbc.ini from within the running container.
So my problem is twofold:
No matter what I try, I cannot get /etc/odbc.ini to be detected as the ODBC connection source, whether it is added via the Dockerfile or manually. I would prefer this location, since I want the connection to be available to anyone using the container.
I can get a connection when I manually copy the contents of odbc.ini above to /home/rstudio/.odbc.ini, but if the file is added there during the Docker build, the connection does not work, even though the file exists with all the correct data. It is simply not detected by ODBC.
In case it's relevant:
odbcinst -j
unixODBC 2.3.6
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/rstudio/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
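In case it helps others debugging the same thing: a quick way to check whether unixODBC itself resolves the DSN, independently of R, is the isql tool that ships with the unixodbc package (a suggested check, not something from the original setup):
# from a shell inside the container: try the system DSN directly
isql -v "PostgreSQL ANSI"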

I believe the problem is with the format of your /etc/odbc.ini. I don't have all your scripts, but this is the Dockerfile I used:
FROM rocker/tidyverse:3.6.3
ENV ADD=SHINY
ENV ROOT=TRUE
ENV PASSWORD='abc123'
RUN apt-get update && apt-get install -y \
    less \
    vim \
    unixodbc unixodbc-dev \
    odbc-postgresql
RUN Rscript -e 'install.packages(c("DBI","odbc"))'
ADD ./odbc.ini /etc/odbc.ini
If I use an odbc.ini like this (values not indented):
[mydb]
Driver = PostgreSQL ANSI
ServerName = 127.0.0.1
UserName = postgres
Password = mysecretpassword
Port = 35432
I see this (docker build and R startup messages truncated):
$ docker build -t quux2 .
$ docker run --net='host' -it --rm quux2 bash
> con <- DBI::dbConnect(odbc::odbc(), "mydb")
Error: nanodbc/nanodbc.cpp:983: 00000: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
When I changed the indentation of the file to this:
[mydb]
  Driver = PostgreSQL ANSI
  ServerName = 127.0.0.1
  UserName = postgres
  Password = mysecretpassword
  Port = 35432
I see this:
$ docker build -t quux3 .
$ docker run --net='host' -it --rm quux3 bash
> con <- DBI::dbConnect(odbc::odbc(), "mydb")
> DBI::dbGetQuery(con, "select 1 as a")
a
1 1
(For this demonstration, I'm running postgres:11 as another container, but I don't think that's relevant; what matters is the indented values.)
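As a quick sanity check, you can also confirm how unixODBC registered the DSN without starting R (odbcinst ships with the unixodbc package; the commands below are a suggestion, not part of the original test):
docker run --rm quux3 odbcinst -q -s   # should print [mydb]
docker run --rm quux3 odbcinst -j      # shows which odbc.ini files unixODBC consults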

I am no expert in Docker, and have failed to find the specific documentation for this. But from experience it seems that every time you add a new layer (e.g. using RUN), any copies from previous layers are "forgotten". (Note: this might be completely wrong; if so, please someone correct me and point to the documentation.)
So I would try to combine your RUN arguments and add every file right before the RUN statement where it is needed. This has the added benefit of reducing the final image size, because of the way layers are created and kept.
FROM rocker/tidyverse:3.6.3
ENV ADD=SHINY
ENV ROOT=TRUE
ENV PASSWORD='abc123'
#add files (could also combine them into a single tar file and add it. Or add it via git, which is often used)
ADD odbc.ini /etc/odbc.ini
ADD install_packages.R /tmp/install_packages.R
ADD flagship_ecommerce /home/rstudio/blah/zprojects/flagship_ecommerce
ADD commission_junction /home/rstudio/blah/zprojects/commission_junction
#Combine all runs into a single statement
RUN apt-get update && apt-get install -y \
    less \
    vim \
    unixodbc unixodbc-dev \
    odbc-postgresql \
    && Rscript /tmp/install_packages.R \
    && rm -R /tmp/* \
    && mkdir /srv/shiny-server \
    && ln -s /home/rstudio/blah/zprojects/ /srv/shiny-server/
Note that the ADD statements now come right before the RUN statement where the files are used.
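If you want to see the effect of combining layers, docker history lists one row per layer together with its size; the single combined RUN above should show up as one layer (the image tag below is just an example):
docker history my-image   # one row per layer, with per-layer sizes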

Related

Application logs to stdout with Shiny Server and Docker

I have a Docker container running a shiny app (Dockerfile here).
Shiny Server logs are output to stdout and application logs are written to /var/log/shiny-server. I'm deploying this container to AWS Fargate, whose logging only displays stdout, which makes debugging a deployed application challenging. I'd like to write the application logs to stdout as well.
I've tried a number of potential solutions:
I've tried the solution provided here, but have had no luck. I added exec xtail /var/log/shiny-server/ to my shiny-server.sh as the last line in the file, but app logs are not written to stdout.
I noticed that writing application logs to stdout is now the default behavior in rocker/shiny, but as I'm using rocker/verse:3.6.2 (upgraded from 3.6.0 today) along with RUN export ADD=shiny, I don't think this is standard behavior for the rocker/verse:3.6.2 container with Shiny add-on. As a result, I don't get the default behavior out of the box.
This issue on GitHub suggests an alternative method of forcing application logging to stdout by way of an environment variable SHINY_LOG_STDERR=1 set at runtime, but I'm not Linux-savvy enough to know where that env variable needs to be set to be effective. I found this documentation from Shiny Server v1.5.13, which gives suggestions on which file to set the environment variable in depending on the Linux distro; however, the output of cat /etc/os-release in my container doesn't really line up with any of the distributions in the Shiny Server documentation, thus making the documentation unhelpful.
I tried adding the environment variable from the GitHub issue above in the docker run command, i.e.,
docker run --rm -e SHINY_LOG_STDERR=1 -p 3838:3838 [my image]
as well as
docker run --rm -e APPLICATION_LOGS_TO_STDOUT=true -p 3838:3838 [my image]
and am still not getting the logs to stdout.
I must be missing something here. Can someone help me identify how to successfully get application logs to stdout?
You can add the line ENV SHINY_LOG_STDERR=1 to your Dockerfile (at least, this works with rocker/shiny, not sure about rocker/verse), such as with your Dockerfile:
FROM rocker/verse:3.6.2
## Add shiny capabilities to container
RUN export ADD=shiny && bash /etc/cont-init.d/add
## Install curl and xtail
RUN apt-get update && apt-get install -y \
    curl \
    xtail
## Add pip3 and other Python packages
RUN sudo apt-get update -y && apt-get install -y python3-pip
RUN pip3 install boto3
## Add R packages
RUN R -e "install.packages(c('shiny', 'tidyverse', 'tidyselect', 'knitr', 'rmarkdown', 'jsonlite', 'odbc', 'dbplyr', 'RMySQL', 'DBI', 'pander', 'sciplot', 'lubridate', 'zoo', 'stringr', 'stringi', 'openxlsx', 'promises', 'future', 'scales', 'ggplot2', 'zip', 'Cairo', 'tinytex', 'reticulate'), repos = 'https://cran.rstudio.com/')"
## Update and install
RUN tlmgr update --self --all
RUN tlmgr install ms
RUN tlmgr install beamer
RUN tlmgr install pgf
#Copy app dir and theme dirs to their respective locations
COPY iarr /srv/shiny-server/iarr
COPY iarr/reports/interim_annual_report/theme/SwCustom /opt/TinyTeX/texmf-dist/tex/latex/beamer/
#Force texlive to find my custom beamer theme
RUN texhash
EXPOSE 3838
## Add shiny-server information
COPY shiny-server.sh /usr/bin/shiny-server.sh
COPY shiny-customized.config /etc/shiny-server/shiny-server.conf
## Add dos2unix to eliminate Win-style line-endings and run
RUN apt-get update -y && apt-get install -y dos2unix
RUN dos2unix /usr/bin/shiny-server.sh && apt-get --purge remove -y dos2unix && rm -rf /var/lib/apt/lists/*
# Enable logging to stdout
ENV SHINY_LOG_STDERR=1
RUN ["chmod", "+x", "/usr/bin/shiny-server.sh"]
CMD ["/usr/bin/shiny-server.sh"]
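With SHINY_LOG_STDERR=1 baked into the image, a quick local verification could look like this (shiny-logger is just an example tag; this is a sketch, not from the original answer):
docker build -t shiny-logger .
docker run --rm -p 3838:3838 shiny-logger
# application output should now appear on the container's stdout/stderr,
# which docker logs (and Fargate's log driver) picks up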

Docker: "File does not exist" error on Ubuntu

I have an R plumber server that I want to run using a Docker container, and I have this configuration so far in my Dockerfile:
FROM rocker/r-ver:3.5.0
#update OS and install linux libraries needed to run plumber
RUN apt-get update -qq && apt-get install -y \
    libssl-dev \
    libcurl4-gnutls-dev
#load in dependencies from 00_Libraries.R file
RUN R -e "install.packages('plumber')"
#Copy all files from current directory
COPY / /
#Expose port :80 for traffic
EXPOSE 80
#when the container starts, start the runscript.R script
ENTRYPOINT ["Rscript", "runscript.R"]
In my runscript.R file, I have my server configuration like this:
pr <- plumber::plumb("/home/kristoffer/Desktop/plumber-api/rfiles/plumber.R")$run(port=8000)
Whenever I try to run the Docker image, I get this error:
File does not exist: /home/kristoffer/Desktop/plumber-api/rfiles/plumber.R
Execution halted
I have ensured that all the necessary files are located in the right directory.
EDIT:
I included an image of all the files I have in my directory, to show that the Dockerfile is in the same directory as my other files.
You are copying everything under /. Change your COPY command to:
COPY . .
and make sure that the Dockerfile is in the same directory.
Besides this,
plumber::plumb("/home/kristoffer/Desktop/plumber-api/rfiles/plumber.R")
will also not work, since that path does not exist in your container. Change it to:
plumber::plumb("plumber.R")
if the file is in the same directory.
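Note also that the Dockerfile EXPOSEs port 80 while the run script starts plumber on port 8000, so map a host port to 8000 when running (the image tag is just an example):
docker build -t plumber-api .
docker run --rm -p 80:8000 plumber-api   # host port 80 -> container port 8000, where plumber listens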

How to find error logs when my dockerized shiny app does not work

I'm trying to put my Shiny app in a Docker container. My Shiny app works totally fine on my local computer, but after dockerizing it, I always get an error message on my localhost like The application failed to start. The application exited during initialization.
I have no idea why that happens. I'm new to Docker. How can I find the error logs when I run the Docker image? I need the logs to know what went wrong.
Here is my Dockerfile:
# Install R version 3.6
FROM r-base:3.6.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
    sudo \
    gdebi-core \
    pandoc \
    pandoc-citeproc \
    libcurl4-gnutls-dev \
    libcairo2-dev/unstable \
    libxt-dev \
    libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
    VERSION=$(cat version.txt) && \
    wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
    gdebi -n ss-latest.deb && \
    rm -f version.txt ss-latest.deb
# Install R packages that are required
# TODO: add further package if you need!
RUN R -e "install.packages(c( 'tidyverse', 'ggplot2','shiny','shinydashboard', 'DT', 'plotly', 'RColorBrewer'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
I built the image and ran it like this:
docker build -t myshinyapp .
docker run -p 80:80 myshinyapp
Usually the logs for any (live or dead) container can be found by just using:
docker logs full-container-name
or
docker logs CONTAINERID
(replacing CONTAINERID with the actual ID of your container)
As said above, this usually works even for stopped (but not yet removed) containers, which you can list with:
docker container ls -a
or just
docker ps -a
However, sometimes you won't even have a log, since the container was never created at all (which, from experience, I think fits your case more closely).
This can happen simply because the Docker engine is unable to allocate the resources that your service definition requires.
The application failed to start. The application exited during initialization
is usually a reflection of your Docker engine being unable to get the required resources.
And the most common case for that is as simple as host ports:
if you have another service (dockerized or not) already using the port that you want for your service (in your case, port 80), then Docker will simply be unable to start your container.
So, in short, the easiest fix for that situation (and your first try whenever you face this kind of issue) is to bind some other host port (say 8080) to the port 80 that your service listens on internally (inside your container):
docker run -p 8080:80 myshinyapp
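To confirm up front that the port really is the culprit, you can ask the host and Docker what is already bound to it (a suggested check on top of the above):
sudo ss -ltnp | grep ':80 '     # what is listening on host port 80?
docker ps --filter publish=80   # is another container already publishing it?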
The same principle applies to unallocatable volumes (e.g.: trying to bind a volume as read-only that doesn't actually exist in the host)
As an aside comment/trick:
Since you're not setting a name for your container, you will need to use the container ID instead when looking for its logs.
But instead of typing (or copy-pasting) the full container ID (usually something like 1283c66babea or even longer), you can just type the first few digits, and it will still work as expected:
docker logs 1283c6 or docker logs 1283 or even docker logs 128
(of course... as long as you don't have any other 128***** container)

How to Choose R Server's R as Default in Operationalization, Remote R Workspace and RStudio Server?

So I've set up an Azure Data Science Virtual Machine on Linux (Ubuntu) and I've executed the following in the terminal to enable the Remote R workspace, RStudio Server, R Server Operationalization and Hadoop:
sudo apt update
sudo apt -y upgrade
# Hadoop is installed but doesn't seem to appear on the PATH or have its environment variable set by default
sudo echo "" >> ~/.bashrc
sudo echo "export PATH="'$'"PATH:/opt/hadoop/hadoop-2.7.4/bin" >> ~/.bashrc
sudo echo "export HADOOP_HOME=/opt/hadoop/hadoop-2.7.4" >> ~/.bashrc
#
source ~/.bashrc
#Setting up a password as none exists to begin with because of private key selection in the installation
#RStudio Server requires a password though
"MyPassword\nMyPassword\n" | sudo passwd sshuser
#Unfortunately hadoop fails on Data Science Virtual Machine
#error: mkdir: Call From IM-DSonUbuntu/192.168.5.4 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
# hadoop fs -mkdir /user/RevoShare/rserve2
# hadoop fs -chmod uog+rwx /user/RevoShare/rserve2
sudo mkdir -p /var/RevoShare/rserve2
sudo chmod uog+rwx /var/RevoShare/rserve2
# hadoop fs -mkdir /user/RevoShare/sshuser
# hadoop fs -chmod uog+rwx /user/RevoShare/sshuser
sudo mkdir -p /var/RevoShare/sshuser
sudo chmod uog+rwx /var/RevoShare/sshuser
#Setting up R Server Operationalisation
cd /opt/microsoft/mlserver/9.2.1/o16n
sudo dotnet Microsoft.MLServer.Utils.AdminUtil/Microsoft.MLServer.Utils.AdminUtil.dll -silentoneboxinstall MyPassword
#They say this Data Science Virtual Machine already has RStudio Server, but even though the port 8787 is open, it's nowhere to be found! So installing it now, and after the installation it's accessible by refreshing the page that failed before.
#Perhaps it's not installed then? Or a service is not running like it should?
#https://www.rstudio.com/products/rstudio/download-server/
wget https://download2.rstudio.org/rstudio-server-1.1.414-amd64.deb
yes | sudo gdebi rstudio-server-1.1.414-amd64.deb
#They are small, leave them for debug reasons - let's have evidence the script ran this far.
#sudo rm rstudio-server-1.1.414-amd64.deb
# Remote R workspace Service needs dotnet sdk
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt update
sudo apt -y install dotnet-sdk-2.0.0
sudo apt install libxml2-dev
#Downloading and installing the Remote R service
wget -O rtvs-daemon.tar.gz https://aka.ms/r-remote-services-linux-binary-current
tar -xvzf rtvs-daemon.tar.gz
sudo ./rtvs-install -s
sudo systemctl enable rtvsd
sudo systemctl start rtvsd
#sudo rm rtvs-daemon.tar.gz
#sudo rm rtvs-install
#Fixing Remote R: For some reason, even though 'sudo systemctl enable rtvsd' runs, after every reboot the service won't become automatically active. So let's fix that.
wget https://sa0im0general.blob.core.windows.net/general-blob-container/StartRemoteRAfterReboot.sh
sudo mv StartRemoteRAfterReboot.sh /var/RevoShare/StartRemoteRAfterReboot.sh
sudo /sbin/shutdown -r 5
sudo chown root /etc/rc.local
sudo chmod 755 /etc/rc.local
sudo systemctl enable rc-local.service
sudo -s
sudo find /etc/ -name "rc.local" -exec sed -i 's/exit 0//g' {} \;
sudo echo "" >> /etc/rc.local
sudo echo "sh /var/RevoShare/StartRemoteRAfterReboot.sh" >> /etc/rc.local
sudo echo "exit 0" >> /etc/rc.local
exit
I've also tried the following, one by one, to see if they make any difference to RStudio Server (they didn't, but even if they did, I want a global solution that works on the Remote R Workspace Service and R Server Operationalisation as well, not only RStudio Server):
#Configuring RStudio Server to see the R Server R
sudo echo "rsession-which-r=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> /etc/rstudio/rserver.conf
export RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R
sudo echo "RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> ~/.profile
source ~/.profile
sudo echo "RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> ~/.bashrc
source ~/.bashrc
sudo echo "PATH=$PATH:/opt/microsoft/mlserver/9.2.1/bin/R" >> ~/.bashrc
export PATH=$PATH:/opt/microsoft/mlserver/9.2.1/bin/R
source ~/.bashrc
The problem is that even though "which R" points to R Server's R, i.e. typing "sudo R" will show the message "Loading Microsoft R Server packages, version 9.2.1." and will load packages like RevoScaleR, everything else fails to do so.
Accessing the RStudio Server at http://THE-IP-GOES-HERE.westeurope.cloudapp.azure.com:8787 and logging in with the initial user ("sshuser") (or any other user, for that matter) will NOT load R Server, and the RevoScaleR rx functions are unavailable.
Using my local Visual Studio 2017 to access the remote workspace via "Add connection" on "Workspaces" tab loads MRO and says:
Installed R versions:
[0] Microsoft R Open '3.4.1.1347' (Default)
And finally, when I use R Server's Operationalisation and log in with the "mrsdeploy" package's "remoteLogin()", R Server packages like RevoScaleR are again not loaded, so things like "rxSummary(~., data=iris)" fail with the error: could not find function "rxSummary".
The exact same thing happened when I deployed from azure a "Machine Learning Server 9.2.1 on Linux (Ubuntu)".
I don't want to just use the regular open-source R; I want to be able to use R Server. That's why I deployed this VM. How can I make it so that everything loads R Server's R, not Microsoft R Open (like I'm able to do from the terminal using "R")?
As a result of having tried all of this, and the fact that R Server does load in the console, my mind now goes to permissions. Could it be that by default the Data Science VM doesn't have the correct permissions to allow this?
I'm at a loss.
RStudio Server is installed on the Ubuntu DSVM, but the service is disabled by default as it does not support SSL. You can enable it with systemctl enable rstudio-server, then start it with systemctl start rstudio-server.
RStudio Server uses the same R as Microsoft R Server, but the .libPaths are different, which is why you cannot load the MRS packages. You will need to manually set the .libPaths so they match.
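As a sketch of what manually setting the .libPaths could look like (the MRS library location below is an assumption based on the 9.2.1 install prefix from the question; check .libPaths() inside sudo R first and adjust):
# append a site-wide startup line so every R session prepends the MRS library
# (the MRS library path is an assumption, verify it on your VM)
echo '.libPaths(c("/opt/microsoft/mlserver/9.2.1/libraries/RServer", .libPaths()))' | \
    sudo tee -a "$(R RHOME)/etc/Rprofile.site"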

run apps using audio in a docker container

This question is inspired by Can you run GUI apps in a docker container?.
The basic idea is to run apps with audio and ui (vlc, firefox, skype, ...)
I was searching for Docker containers using PulseAudio, but all containers I found were using PulseAudio streaming over TCP.
(security sandboxing of the applications)
https://gist.github.com/hybris42/ce429de428e5af3a344a
https://github.com/jlund/docker-chrome-pulseaudio
https://github.com/tomparys/docker-skype-pulseaudio
In my case I would prefer playing audio from an app inside the container directly on my host PulseAudio (without SSH tunneling and without bloated Docker images).
PulseAudio, because my Qt app is using it ;)
It took me some time until I found out what is needed (Ubuntu).
We start with the docker run command docker run -ti --rm myContainer sh -c "echo run something"
ALSA:
We need /dev/snd and, by the looks of it, some hardware access.
When we put this together, we have:
docker run -ti --rm \
    -v /dev/snd:/dev/snd \
    --lxc-conf='lxc.cgroup.devices.allow = c 116:* rwm' \
    myContainer sh -c "echo run something"
In newer Docker versions without the lxc flags you should use this:
docker run -ti --rm \
    -v /dev/snd:/dev/snd \
    --privileged \
    myContainer sh -c "echo run something"
PULSEAUDIO:
Update: it may be enough to mount the PulseAudio socket into the container using the -v option. This depends on your version and preferred access method; see other answers for the socket method.
Here we basically need /dev/shm, /etc/machine-id and /run/user/$uid/pulse. But that is not all (maybe because of Ubuntu and how they did it in the past). The environment variable XDG_RUNTIME_DIR has to be the same on the host system and in your Docker container. You may also need /var/lib/dbus, because some apps access the machine id from there (it may contain only a symbolic link to the 'real' machine id). And you may need the hidden home folder ~/.pulse for some temp data (I am not sure about this).
docker run -ti --rm \
    -v /dev/shm:/dev/shm \
    -v /etc/machine-id:/etc/machine-id \
    -v /run/user/$uid/pulse:/run/user/$uid/pulse \
    -v /var/lib/dbus:/var/lib/dbus \
    -v ~/.pulse:/home/$dockerUsername/.pulse \
    myContainer sh -c "echo run something"
In new docker versions you might need to add --privileged.
Of course you can combine both together and use it together with xServer ui forwarding like here: https://stackoverflow.com/a/28971413/2835523
Just to mention:
you can handle most of this (everything except the user id) in the Dockerfile
using uid=$(id -u) to get the user id, and the gid with id -g
creating a docker user with this id
create user script:
mkdir -p /home/$dockerUsername && \
    echo "$dockerUsername:x:${uid}:${gid}:$dockerUsername,,,:/home/$dockerUsername:/bin/bash" >> /etc/passwd && \
    echo "$dockerUsername:x:${uid}:" >> /etc/group && \
    mkdir /etc/sudoers.d && \
    echo "$dockerUsername ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$dockerUsername && \
    chmod 0440 /etc/sudoers.d/$dockerUsername && \
    chown ${uid}:${gid} -R /home/$dockerUsername
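A sketch of how those values could be passed in at build time, assuming the Dockerfile declares matching ARG uid, ARG gid and ARG dockerUsername build arguments (the names are illustrative):
docker build \
    --build-arg uid=$(id -u) \
    --build-arg gid=$(id -g) \
    --build-arg dockerUsername=$USER \
    -t my-audio-image .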
Inspired by the links you've posted, I was able to create the following solution. It is as lightweight as I could get it. However, I'm not sure whether it is (1) secure, and (2) entirely fits your use case (as it still uses the network).
Install paprefs on your host system, e.g. using sudo apt-get install paprefs on an Ubuntu machine.
Launch PulseAudio Preferences, go to the "Network Server" tab, and check the "Enable network access to local sound devices" checkbox [1]
Restart your computer. (Only restarting Pulseaudio didn't work for me on Ubuntu 14.10)
Install Pulseaudio in your container, e.g. sudo apt-get install -y pulseaudio
In your container, run export "PULSE_SERVER=tcp:<host IP address>:<host Pulseaudio port>". For example, export "PULSE_SERVER=tcp:172.16.86.13:4713" [2]. You can find out your IP address using ifconfig and the Pulseaudio port using pax11publish [1].
That's it. Step 5 should probably be automated if the IP address and Pulseaudio port are subject to change. Additionally, I'm not sure if Docker permanently stores environment variables like PULSE_SERVER: If it doesn't then you have to initialize it after each container start.
Suggestions to make my approach even better would be greatly appreciated, since I'm currently working on a similar problem as the OP.
References:
[1] https://github.com/jlund/docker-chrome-pulseaudio
[2] https://github.com/jlund/docker-chrome-pulseaudio/blob/master/Dockerfile
UPDATE (and probably the better solution):
This also works using a Unix socket instead of a TCP socket:
Start the container with -v /run/user/$UID/pulse/native:/path/to/pulseaudio/socket
In the container, run export "PULSE_SERVER=unix:/path/to/pulseaudio/socket"
The /path/to/pulseaudio/socket can be anything, for testing purposes I used /home/user/pulse.
Maybe it will even work with the same path as on the host (taking care of the $UID part) as the default socket; this way the ultimate solution would be -v /run/user/$UID/pulse/native:/run/user/<UID in container>/pulse; I haven't tested this, however.
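Put together, a single run command along those lines could look like this (an untested sketch; paplay assumes pulseaudio-utils is installed in the image):
docker run -ti --rm \
    -v /run/user/$UID/pulse/native:/home/user/pulse \
    -e PULSE_SERVER=unix:/home/user/pulse \
    myContainer paplay /usr/share/sounds/alsa/Front_Center.wav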
After trying most of the solutions described here, I found that only PulseAudio over the network really works. However, you can make it safe by keeping the authentication.
Install paprefs (on host machine):
$ apt-get install paprefs
Launch paprefs (PulseAudio Preferences) > Network Server > [X] Enable network access to local sound devices.
Restart PulseAudio:
$ service pulseaudio restart
Check it worked or restart machine:
$ (pax11publish || xprop -root PULSE_SERVER) | grep -Eo 'tcp:[^ ]*'
tcp:myhostname:4713
Now use that socket:
$ docker run \
    -e PULSE_SERVER=tcp:$(hostname -i):4713 \
    -e PULSE_COOKIE=/run/pulse/cookie \
    -v ~/.config/pulse/cookie:/run/pulse/cookie \
    ...
Check that the user running inside the container has access to the cookie file ~/.config/pulse/cookie.
To test it works:
$ apt-get install mplayer
$ mplayer /usr/share/sounds/alsa/Front_Right.wav
For more info you may check the Docker Mopidy project.
Assuming pulseaudio is installed on the host and in the image, one can provide pulseaudio sound over TCP with only a few steps. pulseaudio does not need to be restarted, and no configuration has to be done on the host or in the image either. This way it is included in x11docker, without the need for VNC or SSH:
First, find a free tcp port:
read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
while : ; do
    PULSE_PORT="`shuf -i $LOWERPORT-$UPPERPORT -n 1`"
    ss -lpn | grep -q ":$PULSE_PORT " || break
done
Get the IP address of the docker daemon. I always find it to be 172.17.42.1/16:
ip -4 -o a | grep docker0 | awk '{print $4}'
Load pulseaudio tcp module, authenticate connection to docker ip:
PULSE_MODULE_ID=$(pactl load-module module-native-protocol-tcp port=$PULSE_PORT auth-ip-acl=172.17.42.1/16)
On docker run, create environment variable PULSE_SERVER
docker run -e PULSE_SERVER=tcp:172.17.42.1:$PULSE_PORT yourimage
Afterwards, unload the tcp module. (Note: for unknown reasons, unloading this module can stop the pulseaudio daemon on the host.)
pactl unload-module $PULSE_MODULE_ID
Edit: How-To for ALSA and Pulseaudio in container
I managed to dockerize a Java game in the following way, effectively passing through the game's sound.
This approach requires building an image, making sure the app has all the dependencies it'll need, in this case pulseaudio and x11. If you're sure your image has everything it needs, you may proceed as stated in the previous answers.
Here, we need to build the image, then we can actually launch it.
docker build -t my-unciv-image .   # run from the directory containing the Dockerfile
docker run --name unciv \
    --device /dev/dri \
    -e DISPLAY=$DISPLAY \
    -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
    --privileged \
    -u $(id -u):$(id -g) \
    -v /path/to/Unciv:/App \
    -v /run/user/$(id -u)/pulse:/run/user/$(id -u)/pulse \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -w /App \
    my-unciv-image \
    java -jar /App/Unciv.jar
In the second command the following is specified:
--name: a name is given to the container
--device: video device*
-e: required environment vars
DISPLAY: the display number
PULSE_SERVER: PulseAudio audio server socket
--privileged: run it privileged*, so it can access all devices
-v: Mounted volumes:
Path to the game mounted into /App in the container**
Audio server socket
Display server socket
-w: Working directory
Here is a docker-compose.yml version of it:
# docker-compose.yml
version: '3'
services:
  unciv:
    build: .
    container_name: unciv
    devices:
      - /dev/dri:/dev/dri   # * either this
    entrypoint: java -jar /App/Unciv.jar
    environment:
      - DISPLAY=$DISPLAY
      - PULSE_SERVER=unix:/run/user/1000/pulse/native
    privileged: true   # * or this
    user: 1000:1000
    volumes:
      - /path/to/game/:/App
      - /run/user/1000/pulse:/run/user/1000/pulse
      - /tmp/.X11-unix:/tmp/.X11-unix
    working_dir: /App
And the corresponding Dockerfile:
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install openjdk-11-jre -y
RUN apt-get install -y xserver-xorg-video-all
RUN apt-get install -y libgl1-mesa-glx libgl1-mesa-dri
RUN apt-get install -y pulseaudio
USER unciv
Notes:
*Only required for a game or anything else that uses OpenGL. Either pass the devices explicitly or run it as privileged; I think it's enough to pass the device, and making it privileged may be overkill.
**The game itself may be bundled into the Docker image; it is mounted here just for the demo.
For the audio, it's required to pass the env variable PULSE_SERVER and to mount the PulseAudio socket.
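To verify from the host that the container actually reaches PulseAudio, you can query the server through the mounted socket (pactl comes with the pulseaudio package installed in the Dockerfile above):
docker exec unciv pactl info   # should print the host's PulseAudio server info rather than a connection error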
