How to use mitmdump inside AWS CodeBuild?

I need to log all HTTPS traffic from an AWS CodeBuild build to a file. I'm trying to do this with mitmdump (the mitmproxy CLI).
This is my current buildspec.yml:
version: 0.2
phases:
  build:
    on-failure: CONTINUE
    commands:
      - sudo apt-get update
      - sudo apt-get -y install mitmproxy
      - mitmdump -w mitmdump.txt > mitmlog.txt &
      - sleep 5 # just to be sure
      - curl https://github.com/
      # there will be a lot of other https requests from this point which I can't control
      # they will simply be placed here and executed
artifacts:
  files:
    - 'mitmdump.txt'
    - 'mitmlog.txt'
The result: mitmdump.txt is empty, and mitmlog.txt has only one line written:
Proxy server listening at http://*:8080
I've also tried:
- mitmdump --listen-port 443 -w mitmdump.txt > mitmlog.txt &
and
- mitmdump --listen-port 80 -w mitmdump.txt > mitmlog.txt &
But the results are the same.
What is the correct way of using mitmdump in this scenario?
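From the logs, mitmdump starts and listens, but nothing ever connects to it: a proxy only captures traffic from clients that are explicitly pointed at it. A minimal sketch of what the build commands would need (untested in CodeBuild; the CA path is mitmproxy's default, generated on first run):
      - mitmdump -w mitmdump.txt > mitmlog.txt &
      - sleep 5 # give mitmdump time to start and write ~/.mitmproxy/mitmproxy-ca-cert.pem
      - export http_proxy=http://127.0.0.1:8080 https_proxy=http://127.0.0.1:8080
      - curl --cacert ~/.mitmproxy/mitmproxy-ca-cert.pem https://github.com/
Only commands that honor http_proxy/https_proxy are captured, and each HTTPS client also has to trust the mitmproxy CA (for example by copying it into /usr/local/share/ca-certificates/ and running update-ca-certificates); otherwise certificate verification fails.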

Related

Curl connection refused on circleci but works on local machine

I have a CircleCI pipeline, and after deployment I run a smoke test to check the application status. Here is the code:
smoke-test:
  docker:
    - image: python:3.10.5-alpine3.16
  steps:
    - checkout
    - run:
        name: Install dependencies
        command: |
          apk add --update --no-cache curl aws-cli tar gzip jq
    - run:
        name: Backend smoke test
        command: |
          export BACKEND_IP=$(aws ec2 describe-instances \
            --filters "Name=tag:Name,Values=UdaPeople-backend-${CIRCLE_WORKFLOW_ID:0:5}" \
              'Name=instance-state-name,Values=running' \
            --query 'Reservations[*].Instances[*].PublicIpAddress' \
            --output text)
          export API_URL="http://${BACKEND_IP}:3030/api/status"
          echo "${API_URL}"
          wget "${API_URL}"
          if curl -s -v "${API_URL}" | grep "ok"
          then
            return 0
          else
            return 1
          fi
More details:
The server I am trying to query is an EC2 instance with a security group that allows all IP addresses on port 3030.
I pulled the container image I use in CircleCI and tested the curl and wget commands locally. They work perfectly.
I have made more than 30 deployments, and the result is the same.
The error output from CircleCI shows that it actually hits the IP address.
I increased the timeout seconds and also set the retries to 5.
What could I be missing?
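If the backend simply isn't listening yet when the step runs, one common mitigation is a bounded retry loop before giving up (a sketch only, since the root cause isn't established here):
for i in $(seq 1 12); do
    # poll the status endpoint for up to a minute before failing the step
    if curl -s "${API_URL}" | grep -q "ok"; then
        exit 0
    fi
    sleep 5
done
exit 1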

Making HTTPS requests within a Docker image behind a Zscaler firewall

I'm interested in running a simple image like this behind a corporate Zscaler firewall:
FROM rocker/r-base
RUN apt-get update && apt-get install libssl-dev
CMD Rscript -e "install.packages('beepr')"
Building the image with docker build -t test . fails with errors like this:
Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: ]
I've tried some of the solutions from here but they don't work. For example:
FROM rocker/r-base
# Add local certificate to Docker
ADD ./zscaler.cer /usr/local/share/ca-certificates/zscaler.crt
# Move the certificate to the cert dir of openssl and update certificates
RUN CERT_DIR=$(openssl version -d | cut -f2 -d \")/certs ; cp /usr/local/share/ca-certificates/zscaler.crt $CERT_DIR ; update-ca-certificates
# Try making https requests
RUN apt-get update && apt-get install libssl-dev
CMD Rscript -e "install.packages('beepr')"
The same errors persist with docker build -t test .. I've read some possible solutions online, but all of them fail, either for apt-get or for installing packages with R. Has anyone experienced this and found a fix?
Apparently, the current advice is slightly wrong. The certificate should not go in /etc/ssl/certs/ (which is the result of CERT_DIR=$(openssl version -d | cut -f2 -d \")/certs) but rather in CERT_DIR=/usr/local/share/ca-certificates/ (at least on this Ubuntu image). After changing that, update-ca-certificates correctly installs the certificate and all HTTPS requests succeed.
This should work now:
FROM rocker/r-base
# Add local certificate to Docker
ADD ./zscaler.pem /usr/local/share/ca-certificates/ZscalerRootCertificate-2048-SHA256.crt
# update certificates
RUN update-ca-certificates
# Try making https requests
RUN apt-get update && apt-get install libssl-dev
CMD Rscript -e "install.packages('beepr')"
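To verify the certificate was actually picked up before relying on the CMD, a throwaway HTTPS request can be added at build time (a sketch; it uses R itself so no extra tools are assumed to be in the image):
# fails the build if TLS verification is still broken
RUN Rscript -e "download.file('https://cran.r-project.org/', tempfile())"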

Symfony 4 app works with Docker Compose but breaks with Docker Swarm (no login, profiler broken)

I'm using Docker Compose locally with:
app container: Nginx & PHP-FPM with a Symfony 4 app
PostgreSQL container
Redis container
It works great locally, but when deployed to the development Docker Swarm cluster, I can't log in to the Symfony app.
The Swarm stack is the same as local, except for PostgreSQL which is installed on its own server (not a Docker container).
Using the profiler, I nearly always get the following error:
Token not found
Token "2df1bb" was not found in the database.
When I display the content of the var/log/dev.log file, I get these lines about my login attempts:
[2019-07-22 10:11:14] request.INFO: Matched route "app_login". {"route":"app_login","route_parameters":{"_route":"app_login","_controller":"App\\Controller\\SecurityController::login"},"request_uri":"http://dev.ip/public/login","method":"GET"} []
[2019-07-22 10:11:14] security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} []
[2019-07-22 10:11:14] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.DEBUG: Guard authenticator does not support the request. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.INFO: Populated the TokenStorage with an anonymous Token. [] []
The only thing I find potentially useful here is the Guard authenticator does not support the request. message, but I have no idea what to search for from there.
UPDATE:
Here is my docker-compose.dev.yml (removed redis container and changed app environment variables):
version: "3.7"
networks:
  web:
    driver: overlay
services:
  # Symfony + Nginx
  app:
    image: "registry.gitlab.com/my-image"
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
    networks:
      - web
    ports:
      - 80:80
    environment:
      APP_ENV: dev
      DATABASE_URL: pgsql://user:pass@0.0.0.0/my-db
      MAILER_URL: gmail://user@gmail.com:pass@localhost
Here is the Dockerfile.dev used to build the app image on development servers:
# Base image
FROM php:7.3-fpm-alpine
# Source code into:
WORKDIR /var/www/html
# Import Symfony + Composer
COPY --chown=www-data:www-data ./symfony .
COPY --from=composer /usr/bin/composer /usr/bin/composer
# Alpine Linux packages + PHP extensions
RUN apk update && apk add \
        supervisor \
        nginx \
        bash \
        postgresql-dev \
        wget \
        libzip-dev zip \
        yarn \
        npm \
    && apk --no-cache add pcre-dev ${PHPIZE_DEPS} \
    && pecl install redis \
    && docker-php-ext-enable redis \
    && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
    && docker-php-ext-install pdo_pgsql \
    && docker-php-ext-configure zip --with-libzip \
    && docker-php-ext-install zip \
    && composer install \
        --prefer-dist \
        --no-interaction \
        --no-progress \
    && yarn install \
    && npm rebuild node-sass \
    && yarn encore dev \
    && mkdir -p /run/nginx
# Nginx conf + Supervisor entrypoint
COPY ./dev.conf /etc/nginx/conf.d/default.conf
COPY ./.htpasswd /etc/nginx/.htpasswd
COPY ./supervisord.conf /etc/supervisord.conf
EXPOSE 80 443
ENTRYPOINT /usr/bin/supervisord -c /etc/supervisord.conf
UPDATE 2:
I pulled my Docker images and ran the application using only the docker-compose.dev.yml (without the docker-compose.local.yml that I also use locally). I was able to log in; everything was okay.
So... It works with Docker Compose locally, but not in Docker Swarm on a remote server.
UPDATE 3:
I made the dev server leave the Swarm cluster and started the services using Docker Compose. It works.
The issue is about going from Compose to Swarm. I created an issue: docker/swarm #2956
Maybe it's not your specific case, but it could help users who hit problems in Docker Swarm that are not present in Docker Compose.
I've been fighting this issue for over a week. I found that the default network for Docker Compose uses the bridge driver, while Docker Swarm uses overlay.
Later, I read in the Caveats section of the Postgres Docker image repo that there's a problem with IPVS connection timeouts in overlay networks, and it refers to this blog for solutions.
I tried the first option and changed the endpoint_mode setting to dnsrr in my docker-compose.yml file:
db:
  image: postgres:12
  # Other settings ...
  deploy:
    endpoint_mode: dnsrr
Keep in mind that there are some caveats (mentioned in the blog) to consider. However, you could try the other options.
You may also find something useful in this issue, as they faced the same problem.
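Another option from the same blog is to keep idle connections below the IPVS timeout by enabling TCP keepalives. For PostgreSQL that can be set on the server side (a sketch; these are standard postgresql.conf parameters, and the values are illustrative):
db:
  image: postgres:12
  # send keepalives on idle connections, well under the ~15 min IPVS timeout
  command: postgres -c tcp_keepalives_idle=600 -c tcp_keepalives_interval=30 -c tcp_keepalives_count=10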

RStudio and Shiny in one dockerfile

I am looking into docker to distribute a shiny application that also requires RStudio. The primary goal is easy installation at hospitals under Windows. Everything that requires character input into black boxes will certainly fail during installation by non-IT people.
My previous attempts used vagrant, but installing vagrant alone proved to be a hurdle.
The rocker repository has an RStudio image and a Shiny image, and for my own installation both work together. However, I would like to create a combined application for easier installation.
What is the recommended workflow? Start with RStudio and manually add Shiny?
Or merge the Dockerfile code from both Rocker images, starting with r-base? Or use the compose tool?
The point of Docker, in general, is isolation of services so that they can be updated/changed without affecting others. My recommendation would be to use docker-compose instead. Below is an example docker-compose yaml file that serves both RStudio and Shiny on the same server at different subdomains, using the incredibly useful docker-gen by Jason Wilder. All R docker images used below are courtesy of Rocker, or more directly the Rocker Docker Hub. These are very reliable because, well, Dirk Eddelbuettel and Carl Boettiger made them. In this example I've also included some options for RStudio, such as setting a user/pass and whether or not the user has root access. There are more instructions on using the Rocker RStudio image available on its wiki page.
Change the following:
your_user to your username on the server
SOME_USER to your desired RStudio username
SOME_PASS to your desired Rstudio password
*.DOMAIN.tld to your domain; don't forget to add A records for your subdomains.
nginx1:
  image: nginx
  container_name: nginx
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /etc/nginx/conf.d
    - /etc/nginx/vhost.d
    - /usr/share/nginx/html
    - /home/your_user/services/volumes/proxy/certs:/etc/nginx/certs:ro
nginx-gen:
  links:
    - "nginx1"
  image: jwilder/docker-gen
  container_name: nginx-gen
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /home/your_user/services/volumes/proxy/templates/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  volumes_from:
    - nginx1
  entrypoint: /usr/local/bin/docker-gen -notify-sighup nginx -watch -only-exposed -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
rstudio:
  links:
    - "nginx1"
  image: rocker/hadleyverse
  container_name: rstudio
  ports:
    - "8787:8787"
  environment:
    - VIRTUAL_PORT=8787
    - ROOT=TRUE
    - VIRTUAL_HOST=rstudio.DOMAIN.tld
    - USER=SOME_USER
    - PASSWORD=SOME_PASS
shiny:
  links:
    - "nginx1"
  image: rocker/shiny
  container_name: shiny
  environment:
    - VIRTUAL_HOST=shiny.DOMAIN.tld
  volumes:
    - /home/your_user/services/volumes/shiny/apps:/srv/shiny-server/
    - /home/your_user/services/volumes/shiny/logs:/var/log/
    - /home/your_user/services/volumes/shiny/packages:/home/shiny/
It's trivial to add more services, like a blog for example: just follow the pattern (see the sketch below), or search the internet for a docker-compose version of your service and add it.
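For instance, a Ghost blog could slot into the same docker-gen pattern (a sketch; the image choice and subdomain are illustrative):
blog:
  links:
    - "nginx1"
  image: ghost
  container_name: blog
  environment:
    - VIRTUAL_HOST=blog.DOMAIN.tld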
Interesting question, but I'm not sure I understand the advantage of having the shiny-server and the rstudio-server instances served from the same container.
Is the purpose so that the two containers share the same R libraries (e.g. so a package doesn't need to be installed separately on each) or merely to have one docker container instead of two? Just having to run two docker commands instead of one doesn't seem that onerous, but maybe I'm underestimating.
Sharing the underlying libraries seems like a valid objective though, and I don't think there's an ideal solution available yet.
I feel the most docker-esque solution would be to do this via container orchestration/compose tool as you mention. This is the usual way to combine separate services (e.g. web server and database) without building one on top of the other.
Unfortunately, the tooling for orchestration based on mapping volumes is not nearly as well developed as it is for mapping ports.
Imagine running rstudio as a volume container:
docker run --name rstudio -v /usr/local/lib/R/site.library rocker/rstudio true
(If you wanted RStudio access at the same time, one could instead run this as:)
docker run --name rstudio -dP -v /usr/local/lib/R/site.library rocker/rstudio
You can then use the site.library from the rstudio container in place of that on the shiny container with a command like:
docker run --volumes-from rstudio -dP rocker/shiny
Unfortunately, this clobbers the site.library of the shiny container. To work around this, you'd want to mount the library of the rstudio container in a different place, but there's no easy syntax for this like we already have with port links. It can be done though, see:
How to map volume paths using Docker's --volumes-from?
There's an open thread on this issue in the rocker repo too.
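With current Docker, a named volume shared by both containers sidesteps the clobbering problem, because the mount point can differ per container (a sketch; the volume name and the second in-container path are illustrative):
docker volume create rlibs
docker run --name rstudio -dP -v rlibs:/usr/local/lib/R/site.library rocker/rstudio
docker run -dP -v rlibs:/usr/local/lib/R/shared.library rocker/shiny
The shiny container would then need the shared path added to its library search path, e.g. via the R_LIBS_SITE environment variable or .libPaths('/usr/local/lib/R/shared.library') in the app.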
I have developed a working single docker image for:
R
RStudio (server)
Shiny Server (free edition)
I built it exactly for the same reasons mentioned by @Dieter Menne. It may not be ideal for ops, but it's great for dev (especially if the team members all use different environments, like Mac, Windows, etc.).
It is on CentOS 6, as this is the environment I use at work.
This is the dockerfile:
FROM centos:centos6.7
MAINTAINER enzo smartinsightsfromdata
RUN yum -y install epel-release
RUN yum update -y && yum clean all
# RUN yum reinstall -y glibc-common
RUN yum install -y locales java-1.7.0-openjdk-devel tar
# Misc packages
RUN yum groupinstall -y "Development Tools"
# R devtools pre-requisites:
RUN yum install -y wget git xml2 libxml2-devel curl curl-devel openssl-devel
WORKDIR /home/root
RUN yum install -y R
RUN wget http://cran.r-project.org/src/contrib/rJava_0.9-7.tar.gz
RUN R CMD INSTALL rJava_0.9-7.tar.gz
RUN R CMD javareconf \
&& rm -rf rJava_0.9-7.tar.gz
#-----------------------
# Add RStudio binaries to PATH
# export PATH="/usr/lib/rstudio-server/bin/:$PATH"
ENV PATH /usr/lib/rstudio-server/bin/:$PATH
ENV LANG en_US.UTF-8
RUN yum install -y openssl098e supervisor passwd pandoc
# RUN wget http://download2.rstudio.org/rstudio-server-rhel-0.99.484-x86_64.rpm
# Go for the bleeding edge:
RUN wget https://s3.amazonaws.com/rstudio-dailybuilds/rstudio-server-rhel-0.99.697-x86_64.rpm
RUN yum -y install --nogpgcheck rstudio-server-rhel-0.99.697-x86_64.rpm \
    && rm -rf rstudio-server-rhel-0.99.697-x86_64.rpm
RUN groupadd rstudio \
&& useradd -g rstudio rstudio \
&& echo rstudio | passwd rstudio --stdin
RUN R -e "install.packages(c('shiny', 'rmarkdown'), repos='http://cran.r-project.org', INSTALL_opts='--no-html')"
RUN wget https://download3.rstudio.org/centos5.9/x86_64/shiny-server-1.4.0.756-rh5-x86_64.rpm
RUN yum -y install --nogpgcheck shiny-server-1.4.0.756-rh5-x86_64.rpm \
&& rm -rf shiny-server-1.4.0.756-rh5-x86_64.rpm
RUN mkdir -p /var/log/shiny-server \
&& chown shiny:shiny /var/log/shiny-server \
&& chown shiny:shiny -R /srv/shiny-server \
&& chmod 777 -R /srv/shiny-server \
&& chown shiny:shiny -R /opt/shiny-server/samples/sample-apps \
&& chmod 777 -R /opt/shiny-server/samples/sample-apps
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN mkdir -p /var/log/supervisor \
&& chmod 777 -R /var/log/supervisor
EXPOSE 8787 3838
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
This is what the supervisord.conf file looks like:
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log
pidfile = /tmp/supervisord.pid
[program:rserver]
user=root
command=/usr/lib/rstudio-server/bin/rserver
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
startsecs=0
autorestart=false
[program:shinyserver]
user=root
command=/usr/bin/shiny-server
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
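Assuming the Dockerfile and supervisord.conf sit in the same directory, building and running the image might look like this (a sketch; the image name is illustrative):
docker build -t rstudio-shiny .
docker run -d -p 8787:8787 -p 3838:3838 rstudio-shiny
# RStudio Server on http://localhost:8787 (user/password: rstudio), Shiny Server on http://localhost:3838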
It is available at my github page: smartinsightsfromdata
I have also developed a working docker for shiny server pro on centos (using shiny server pro temporary edition, valid 45 days only).
Somewhat unfortunately, there is no definitive answer; it all depends on how much reusability you are looking for and whether an upstream base image is well maintained. There is also an image-size tradeoff: the more layers there are, the bigger the resulting image gets.

run apps using audio in a docker container

This question is inspired by Can you run GUI apps in a docker container?.
The basic idea is to run apps with audio and ui (vlc, firefox, skype, ...)
I was searching for docker containers using pulseaudio, but all containers I found were using pulseaudio streaming over tcp
(security sandboxing of the applications)
https://gist.github.com/hybris42/ce429de428e5af3a344a
https://github.com/jlund/docker-chrome-pulseaudio
https://github.com/tomparys/docker-skype-pulseaudio
In my case I would prefer playing audio from an app inside the container directly on my host pulseaudio (without ssh tunneling and without bloated docker images).
Pulseaudio because my qt app is using it ;)
It took me some time until I found out what is needed (on Ubuntu).
We start with the docker run command: docker run -ti --rm myContainer sh -c "echo run something"
ALSA:
It looks like we need /dev/snd and some hardware access.
When we put this together we have:
docker run -ti --rm \
    -v /dev/snd:/dev/snd \
    --lxc-conf='lxc.cgroup.devices.allow = c 116:* rwm' \
    myContainer sh -c "echo run something"
In new docker versions without lxc flags you should use this:
docker run -ti --rm \
    -v /dev/snd:/dev/snd \
    --privileged \
    myContainer sh -c "echo run something"
PULSEAUDIO:
Update: it may be enough to mount the pulseaudio socket into the container using the -v option. This depends on your version and preferred access method; see other answers for the socket method.
Here we basically need /dev/shm, /etc/machine-id and /run/user/$uid/pulse. But that is not all (maybe because of Ubuntu and how they did it in the past). The environment variable XDG_RUNTIME_DIR has to be the same on the host system and in your docker container. You may also need /var/lib/dbus, because some apps access the machine id from there (it may only contain a symbolic link to the 'real' machine id). And you may need the hidden home folder ~/.pulse for some temp data (I am not sure about this).
docker run -ti --rm \
    -v /dev/shm:/dev/shm \
    -v /etc/machine-id:/etc/machine-id \
    -v /run/user/$uid/pulse:/run/user/$uid/pulse \
    -v /var/lib/dbus:/var/lib/dbus \
    -v ~/.pulse:/home/$dockerUsername/.pulse \
    myContainer sh -c "echo run something"
In new docker versions you might need to add --privileged.
Of course you can combine both together and use it together with xServer ui forwarding like here: https://stackoverflow.com/a/28971413/2835523
Just to mention:
you can handle most of this (all except the user id) in the dockerfile
using uid=$(id -u) to get the user id, and gid with id -g
creating a docker user with this id
create user script:
mkdir -p /home/$dockerUsername && \
echo "$dockerUsername:x:${uid}:${gid}:$dockerUsername,,,:/home/$dockerUsername:/bin/bash" >> /etc/passwd && \
echo "$dockerUsername:x:${uid}:" >> /etc/group && \
mkdir /etc/sudoers.d && \
echo "$dockerUsername ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$dockerUsername && \
chmod 0440 /etc/sudoers.d/$dockerUsername && \
chown ${uid}:${gid} -R /home/$dockerUsername
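In a Dockerfile, the host ids can be passed in as build arguments so the script above runs at build time (a sketch; here $dockerUsername is fixed to dockeruser, and the sudoers lines from the script above are omitted for brevity):
ARG uid
ARG gid
ENV dockerUsername=dockeruser
RUN mkdir -p /home/$dockerUsername && \
    echo "$dockerUsername:x:${uid}:${gid}:$dockerUsername,,,:/home/$dockerUsername:/bin/bash" >> /etc/passwd && \
    echo "$dockerUsername:x:${uid}:" >> /etc/group && \
    chown ${uid}:${gid} -R /home/$dockerUsername
USER dockeruser
Build it with: docker build --build-arg uid=$(id -u) --build-arg gid=$(id -g) -t myContainer .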
Inspired by the links you've posted, I was able to create the following solution. It is as lightweight as I could get it. However, I'm not sure if it is (1) secure, and (2) entirely fits your use-case (as it still uses the network).
Install paprefs on your host system, e.g. using sudo apt-get install paprefs on an Ubuntu machine.
Launch PulseAudio Preferences, go to the "Network Server" tab, and check the "Enable network access to local sound devices" checkbox [1]
Restart your computer. (Only restarting Pulseaudio didn't work for me on Ubuntu 14.10)
Install Pulseaudio in your container, e.g. sudo apt-get install -y pulseaudio
In your container, run export "PULSE_SERVER=tcp:<host IP address>:<host Pulseaudio port>". For example, export "PULSE_SERVER=tcp:172.16.86.13:4713" [2]. You can find out your IP address using ifconfig and the Pulseaudio port using pax11publish [1].
That's it. Step 5 should probably be automated if the IP address and Pulseaudio port are subject to change. Additionally, I'm not sure whether Docker permanently stores environment variables like PULSE_SERVER: if it doesn't, you have to initialize it after each container start.
Suggestions to make my approach even better would be greatly appreciated, since I'm currently working on a similar problem as the OP.
References:
[1] https://github.com/jlund/docker-chrome-pulseaudio
[2] https://github.com/jlund/docker-chrome-pulseaudio/blob/master/Dockerfile
UPDATE (and probably the better solution):
This also works using a Unix socket instead of a TCP socket:
Start the container with -v /run/user/$UID/pulse/native:/path/to/pulseaudio/socket
In the container, run export "PULSE_SERVER=unix:/path/to/pulseaudio/socket"
The /path/to/pulseaudio/socket can be anything; for testing purposes I used /home/user/pulse.
Maybe it will even work with the same path as on the host (taking care of the $UID part), so that it is found as the default socket; that way the ultimate solution would be -v /run/user/$UID/pulse/native:/run/user/<UID in container>/pulse. I haven't tested this, however.
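Putting the two steps together, a full invocation might look like this (a sketch, reusing /home/user/pulse as the arbitrary in-container path and myContainer from above):
docker run -ti --rm \
    -v /run/user/$UID/pulse/native:/home/user/pulse \
    -e PULSE_SERVER=unix:/home/user/pulse \
    myContainer \
    paplay /usr/share/sounds/alsa/Front_Right.wav # assumes pulseaudio-utils in the image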
After trying most of the solutions described here, I found that only PulseAudio over the network really works. However, you can make it safe by keeping the authentication.
Install paprefs (on host machine):
$ apt-get install paprefs
Launch paprefs (PulseAudio Preferences) > Network Server > [X] Enable network access to local sound devices.
Restart PulseAudio:
$ service pulseaudio restart
Check it worked or restart machine:
$ (pax11publish || xprop -root PULSE_SERVER) | grep -Eo 'tcp:[^ ]*'
tcp:myhostname:4713
Now use that socket:
$ docker run \
    -e PULSE_SERVER=tcp:$(hostname -i):4713 \
    -e PULSE_COOKIE=/run/pulse/cookie \
    -v ~/.config/pulse/cookie:/run/pulse/cookie \
    ...
Check that the user running inside the container has access to the cookie file ~/.config/pulse/cookie.
To test it works:
$ apt-get install mplayer
$ mplayer /usr/share/sounds/alsa/Front_Right.wav
For more info, you may check the Docker Mopidy project.
Assuming pulseaudio is installed on the host and in the image, one can provide pulseaudio sound over tcp with only a few steps. pulseaudio does not need to be restarted, and no configuration has to be done on the host or in the image either. This way it is included in x11docker, without the need of VNC or SSH:
First, find a free tcp port:
read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
while : ; do
    PULSE_PORT="`shuf -i $LOWERPORT-$UPPERPORT -n 1`"
    ss -lpn | grep -q ":$PULSE_PORT " || break
done
Get the IP address of the docker daemon. I always find it to be 172.17.42.1/16:
ip -4 -o a | grep docker0 | awk '{print $4}'
Load pulseaudio tcp module, authenticate connection to docker ip:
PULSE_MODULE_ID=$(pactl load-module module-native-protocol-tcp port=$PULSE_PORT auth-ip-acl=172.17.42.1/16)
On docker run, create environment variable PULSE_SERVER
docker run -e PULSE_SERVER=tcp:172.17.42.1:$PULSE_PORT yourimage
Afterwards, unload the tcp module. (Note: for unknown reasons, unloading this module can stop the pulseaudio daemon on the host):
pactl unload-module $PULSE_MODULE_ID
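Chained together, the whole lifecycle might look like this (a sketch assembled from the steps above; yourimage is a placeholder):
#!/bin/bash
# find a free tcp port
read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
while : ; do
    PULSE_PORT="$(shuf -i $LOWERPORT-$UPPERPORT -n 1)"
    ss -lpn | grep -q ":$PULSE_PORT " || break
done
# allow connections from the docker network to host pulseaudio
PULSE_MODULE_ID=$(pactl load-module module-native-protocol-tcp port=$PULSE_PORT auth-ip-acl=172.17.42.1/16)
# run the container against the host pulseaudio (blocks until it exits)
docker run -e PULSE_SERVER=tcp:172.17.42.1:$PULSE_PORT yourimage
# clean up (see the note above: this can stop the host daemon)
pactl unload-module $PULSE_MODULE_ID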
Edit: How-To for ALSA and Pulseaudio in container
I managed to dockerize a Java game in the following way, effectively passing through the game's sound.
This approach requires building an image, making sure the app has all the dependencies it'll need, in this case pulseaudio and x11. If you're sure your image has everything it needs, you may proceed as stated in the previous answers.
Here, we need to build the image, then we can actually launch it.
docker build -t my-unciv-image . # Run from directory where Dockerfile is
docker run --name unciv \
    --device /dev/dri \
    -e DISPLAY=$DISPLAY \
    -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
    --privileged \
    -u $(id -u):$(id -g) \
    -v /path/to/Unciv:/App \
    -v /run/user/$(id -u)/pulse:/run/user/$(id -u)/pulse \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -w /App \
    my-unciv-image \
    java -jar /App/Unciv.jar
In the second command the following is specified:
--name: a name is given to the container
--device: video device*
-e: required environment vars
DISPLAY: the display number
PULSE_SERVER: PulseAudio audio server socket
--privileged: run it privileged*, so it can access all devices
-v: mounted volumes:
path to the game, mounted into /App in the container**
audio server socket
display server socket
-w: working directory
Here is a docker-compose.yml version of it:
# docker-compose.yml
version: '3'
services:
  unciv:
    build: .
    container_name: unciv
    devices:
      - /dev/dri:/dev/dri # * Either this
    entrypoint: java -jar /App/Unciv.jar
    environment:
      - DISPLAY=$DISPLAY
      - PULSE_SERVER=unix:/run/user/1000/pulse/native
    privileged: true # * or this
    user: 1000:1000
    volumes:
      - /path/to/game/:/App
      - /run/user/1000/pulse:/run/user/1000/pulse
      - /tmp/.X11-unix:/tmp/.X11-unix
    working_dir: /App
And the corresponding Dockerfile:
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install openjdk-11-jre -y
RUN apt-get install -y xserver-xorg-video-all
RUN apt-get install -y libgl1-mesa-glx libgl1-mesa-dri
RUN apt-get install -y pulseaudio
USER unciv
Notes:
*Only required for a game or anything else that uses OpenGL: either pass the device explicitly or run the container as privileged. I think it's enough to pass the device; making it privileged may be overkill.
**The game could also be bundled into the docker image, but for a demo a mounted volume is simpler.
For the audio, it's required to pass the PULSE_SERVER environment variable and to mount the pulseaudio socket.
