Using ENV variables in daemonized Docker running RStudio

I am able to set up a Dockerfile with default ENV variables that I can then configure when running my docker container, e.g. in a Dockerfile I have the lines:
ENV USERNAME ropensci
ENV EMAIL ropensci@github.com
RUN git config --global user.name $USERNAME
RUN git config --global user.email $EMAIL
Great. When I launch an interactive session:
docker run -it --env USERNAME="Carl" --env EMAIL=cboettig@example.com myimage /bin/bash
I can then issue the command git config --list and see that git is configured to use the values I provided on the command line instead of the defaults.
However, my Dockerfile is also configured to run an RStudio server that I can then log into in the browser when running the image in Daemon mode:
docker run -d -p 8787:8787 --env USERNAME="Carl" --env EMAIL=cboettig@example.com cboettig/ropensci-docker
I go to localhost:8787 and log in to RStudio, which all works as expected, and start a new "Project" with git enabled, but RStudio cannot find my git name & email. If I open the shell from the RStudio menu and run git config --list or echo $USERNAME, I just get a blank value. Why does this work for /bin/bash but not from RStudio, and how do I fix it?

Your git config is written to /.gitconfig, which belongs to the root user. You need to set the git config for the rstudio user, because RStudio runs as the rstudio user. The command below is a temporary workaround:
docker run -it -p 8787:8787 --env USERNAME="Carl" --env EMAIL=cboettig@example.com cboettig/ropensci-docker bash -c "cp /.gitconfig /home/rstudio; /usr/bin/supervisord"
It works!
Another solution is to write a Dockerfile based on cboettig/ropensci-docker. Below is a sample Dockerfile:
FROM cboettig/ropensci-docker
RUN cp /.gitconfig /home/rstudio
CMD ["/usr/bin/supervisord"]

Related

podman mounted volume issue

Bottom line: output from container is not appearing in mounted local directory
I have read the documentation for bind mounts, and I had success with this approach on another project.
My Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -yq build-essential autoconf libnetcdf-dev libxml2-dev libproj-dev valgrind wget unzip git nano
# pulls ADMB from github and unzips it in folder ADMBcode
RUN mkdir /ADMBcode
RUN wget https://github.com/admb-project/admb/releases/download/admb-12.2/admb-12.2-linux.zip
RUN mv admb-12.2-linux.zip /ADMBcode
RUN unzip ADMBcode/admb-12.2-linux.zip -d /ADMBcode
# pulls hydra repo from github into folder HYDRA
RUN mkdir /HYDRA
RUN git clone https://github.com/NOAA-EDAB/hydra_sim.git /HYDRA
# compiles and runs model
WORKDIR /HYDRA
RUN /ADMBcode/admb-12.2/admb hydra_sim.tpl
RUN ./hydra_sim
# create dir for output and move output
#RUN mkdir -p /HYDRA/output/diagnostics
#RUN mkdir /HYDRA/output/indices
# moves output to folder to be mounted
RUN mv *.out /HYDRA/output/diagnostics
RUN mv *.txt /HYDRA/output/indices
I build the image
podman build -t hydra .
and run the container using the following :
podman run --rm --name hydra --mount "type=bind,src=/path_on_local_machine/test,dst=/HYDRA/output" hydra
I have the test folder on my local machine, but the output does not appear in it.
I have entered the container
podman run -it hydra
and checked that the output is there
I have done this before for another model and everything behaved as expected. Not sure why this one does not.
Any ideas what I am doing wrong?
Thanks
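For what it's worth, a bind mount replaces whatever the image holds at the mount point with the host directory's contents, so files moved into /HYDRA/output during the build are hidden once the mount is in place. A sketch of one workaround (keeping the question's paths) is to stage the output elsewhere in the image and copy it into the mounted directory at run time:
# stage the build-time output outside the future mount point
RUN mkdir -p /staging/diagnostics /staging/indices
RUN mv *.out /staging/diagnostics
RUN mv *.txt /staging/indices
# at run time /HYDRA/output is the bind mount, so copy into it then
CMD cp -r /staging/. /HYDRA/output/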

css file integrity check fails after docker build

I have the following issue.
Failed to find a valid digest in the 'integrity' attribute for resource 'http://127.0.0.1:8080/uistatic/css/bootstrap4.0.0.min.css' with computed SHA-256 integrity 'xLbtJkVRnsLBKLrbKi53IAUvhEH/qUxPC87KAjEQBNo='. The resource has been blocked.
This happens when I put my site into Docker by building this Dockerfile:
FROM python:3.6
COPY skfront /app
WORKDIR /app
RUN mkdir -p /static/resources
RUN mkdir logs
RUN mkdir certs
RUN pip3 install -r requirements.txt
RUN python3 manage.py collectstatic --settings blog_site.settings
EXPOSE 8080
CMD python3 website.py -l 0.0.0.0 -p 8080
This CSS is a static file that doesn't change.
Any ideas why this is happening?
I found out that building the Dockerfile on Windows can mess up the final image.
When I built my site under Linux, there were no integrity errors.
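The usual culprit on Windows is git's line-ending conversion: if autocrlf checks the CSS file out with CRLF endings, its bytes, and therefore its SHA-256 hash, no longer match the integrity attribute. One way to guard against this (a sketch, assuming the assets are tracked in git) is to pin static assets to LF in .gitattributes:
# .gitattributes -- keep static assets byte-identical across platforms
*.css text eol=lf
*.js text eol=lf
# for pre-minified, hash-pinned assets it can be safest to mark them binary:
# *.min.css binary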

Cannot copy intermediate docker container files to host

I have a Dockerfile that runs dotnet publish, and the DLLs are copied into an intermediate Docker container. I would like to copy the DLLs generated in the container to my local system (host) as well.
I believe we can use the "cp" command to do that, but I am not able to find a way to get the intermediate container ID to use with the "cp" command.
Syntax: docker cp CONTAINER:Container_Path Host_Path
Please suggest any other, better solution for this scenario.
Dockerfile:
FROM microsoft/aspnetcore-build:1.1.4 as builder
COPY . /Code
RUN dotnet restore /Code/MyProj.csproj
RUN dotnet publish -c Release /Code/MyProj.csproj
RUN cp CONTAINER: /Code/bin/Release/netcoreapp1.1/publish /binaries
Thanks.
This answer is outside of the Dockerfile.
First, your Dockerfile would have to declare a volume:
VOLUME /my/path/in/container
To get files into and out of a volume, try using tar -cvf and tar -xvf to put and get files between a container and a host.
To put files from the host's newfiles.tar (in the current directory) into the container's /my/path/in/container mount:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -xf /newfiles/newfiles.tar"
To get files out of the container's /my/path/in/container mount into origfiles.tar on the host:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -cf /newfiles/origfiles.tar"
You can optionally add --user 1000:1000 to these commands if your container has a user with a uid of 1000, so the files end up owned by that user.
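Another common approach (a sketch; "myimage" is a placeholder tag) avoids volumes entirely: build the image, create a stopped container from it, and use docker cp with that container's ID:
docker build -t myimage .
# create (but don't start) a container so there is an ID to cp from
id=$(docker create myimage)
docker cp "$id":/Code/bin/Release/netcoreapp1.1/publish ./binaries
docker rm "$id"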

How do you rsync build files from Gitlab CI to another server

It's unclear to me how to get my build files from the Gitlab CI (hosted on https://ci.gitlab.com) over to my personal server using rsync.
I have setup 1 test and 1 deploy job.
Under the deploy tab I have entered the bash commands to:
Install rsync
Update packages
Finally, run the rsync command to transfer files over SSH to my personal server.
When I enter the SSH credentials (with verbose flag on) for my private personal server, it would appear that the SSH key is the issue. In Gitlab, I have already established the deploy key (for hooks - tested this and it works).
Where do I locate the public SSH key for the Gitlab deploy instance so that I can install that key on my server?
Below is the exact script entered in Gitlab CI deploy job script pane:
# Run as root
(
set -e
set -u
set -x
apt-get update -y
apt-get -y install rsync
)
git clone https://github.com/bla/deployments.git $HOME/deploy/deployments
SVR_WEB1_WEBSERVER="000.11.22.333"
USER1="franklin"
GROUP1="team1"
FROM_DIR="/gitlab-ci-runner/tmp/builds/myrepo-1/"
DEST1="subdomains/gitlab/myrepo"
EXCLUSIONS_LIST="${HOME}/deploy/deployments/exclusions/exclusions.txt"
ssh -v "$USER1@$SVR_WEB1_WEBSERVER"
/usr/bin/rsync -avzh --progress --delete -e ssh --group=$GROUP1 -p --exclude-from "$EXCLUSIONS_LIST" "$FROM_DIR" "$USER1#$SVR_WEB1_WEBSERVER:$DEST1"
Providing your private SSH key to the CI service is dangerous unless you use your own gitlab-ci runners for deployment. That's why it is better to use rsync modules (the rsync daemon protocol), which need no SSH key at all.
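A minimal sketch of the module approach (the module name "myrepo", the user "deploy", and the paths are placeholders): configure an rsync daemon on the target server, then push from CI with a password instead of an SSH key.
# /etc/rsyncd.conf on the deploy server
[myrepo]
    path = /var/www/subdomains/gitlab/myrepo
    read only = false
    auth users = deploy
    secrets file = /etc/rsyncd.secrets
# from the CI job: the double colon selects the rsync daemon protocol
RSYNC_PASSWORD="$DEPLOY_PASSWORD" rsync -avzh --delete "$FROM_DIR" deploy@000.11.22.333::myrepo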

how can I set the working directory in old version of docker in the run command?

I am a bit new to Docker and have been trying to deploy a Meteor container with my Meteor application. I have been using the Dockerfile and instructions from https://registry.hub.docker.com/u/golden/meteor-dev/
However, I can't run docker run -p 3000:3000 -t -i -v /path/to/meteor/app:/opt/application -w /opt/application meteor-dev because my Docker (version 0.5.3) does not recognize the -w flag that sets the working directory.
Is there some workaround to set the working directory with Docker 0.5.3? The working directory is already set in the Dockerfile, but I guess I need to set it again when I run the container.
Well, my workaround was to create a bash script that changes to the working directory and then calls the commands one by one. I created the bash script where my source is located ("/path/to/meteor/app") and call docker run -p 3000:3000 -t -i -v /path/to/meteor/app:/opt/application meteor-dev bash /opt/application/start.sh, with bash as the command and my script as its argument.
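The script itself isn't shown in the answer; a minimal sketch of such a start.sh (assuming the meteor command is on the PATH inside the image) would be:
#!/bin/bash
# emulate docker run's -w flag by changing directory before starting
cd /opt/application
meteor --port 3000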
