Docker shows inconsistent behaviour when creating container from image - unix

I am developing a web application which depends on a Moodle system, as it uses Moodle's web services. For my automated tests, I wanted to use Docker to provide a preconfigured Moodle application on all my machines. Therefore I created a Docker image, which I import from a .tar.gz file.
However, creating a new container instance from this image behaves inconsistently. Sometimes the container boots up correctly and everything works fine. Sometimes, however, the container starts but the Moodle website is not reachable. If I attach a shell to the container using docker exec -it <container> bash, I can see that Apache is running. The error logs do not show any entries that might be related to this issue.
If I kill the container instance and boot it up again, everything works as expected (sometimes this step has to be repeated several times). Do you have any idea what could be the reason for this strange behaviour? Is anyone else experiencing similar issues?
Docker is running on Ubuntu 14.04. The problem appears on several machines. The script which imports the image and starts the container looks like this:
#!/usr/bin/env bash
docker rm -f moodle
docker load < my-moodle.tar.gz
docker run -d -p 8080:80 -p 8443:443 -p 3306:3306 --name moodle moodle-image
Thanks in advance!

Successful container startup depends on your container's entrypoint and, if that entrypoint has external dependencies, on those resources being available. What is the entrypoint, and does it depend on anything outside the container?
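For example, one way to check the entrypoint, and what the container printed while starting, is something like this (moodle-image and moodle are the names used in the script above):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' moodle-image
docker logs moodle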

Related

Using Docker image without Entrypoint to serve R plumber API

I use the geospatial rocker2 image to deploy RStudio for development and a Shiny app for production. By using a single image, I have a consistent package library, credentials and database connections. I would like to use this same image to serve a plumber API.
Using the standard plumber.R example and the standard plumber Docker example I have tried to serve it as follows:
docker run -v `pwd`/app/plumber.R:/plumber.R --name plumber --restart=unless-stopped \
-p 8000:8000 my_rocker2_fork/geospatial Rscript /plumber.R
Success, kind of. The plumber.R file is clearly being sourced, but it is not being "plumbed".
Another issue is that the container continually restarts, which the output of docker ps shows (ignore the node.js container that is also running).
One more oddity is that port 8000 isn't always shown in that output. Sometimes it is, sometimes it isn't. I think this is related to the restarting behaviour.
My code isn't plumbed because I don't have the entrypoint that is standard in the rstudio/plumber Dockerfile, and I don't think I want this entrypoint, as it may cause issues with RStudio Server and the Shiny app that are also in this image. Therefore, I think it is probably best to "plumb" by expanding the Rscript command at the end of my docker run statement:
docker run -v `pwd`/app/plumber.R:/plumber.R -p 8000:8000 my_rocker2_fork/geospatial \
'Rscript pr("/plumber.R") %>% pr_run(port = 8000)' &
However, this fails because of all the special characters (like the pipe operator). How can I serve plumber code with an arbitrary Dockerfile that has no entrypoint?
The answer is simple! Call a script that sets the plumbing in motion, e.g.
docker run -v `pwd`/app/plumb_start.R:/plumb_start.R -p 8000:8000 my_rocker2_fork/geospatial \
Rscript plumb_start.R
Where plumb_start.R contains:
pr("plumber.R") %>% pr_run(port=8000)
Make sure that you also expose port 8000 in the Dockerfile.
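For reference, a sketch of the full invocation under those assumptions, with both plumber.R and plumb_start.R mounted at the container root so the absolute paths work regardless of the working directory (the ./app location is taken from the original command):
docker run -v `pwd`/app/plumber.R:/plumber.R -v `pwd`/app/plumb_start.R:/plumb_start.R \
  -p 8000:8000 --name plumber --restart=unless-stopped my_rocker2_fork/geospatial \
  Rscript /plumb_start.R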

How do I enter my container if dokku enter <app> does nothing?

I have a simple dokku app using herokuish buildpack-php with a Procfile: web: vendor/bin/heroku-php-apache2 public/
If I try to enter the app with dokku enter <appname> nothing happens, and I am simply returned to my host shell.
I can run dokku run <appname> bash and get a shell, but as far as I understand from the documentation, that places me in a new container and not in the existing/running one I need access to:
The run command can be used to run a one-off process for a specific command. This will start a new container and run the desired command within that container.
How can I fix this so I can enter my running container?
If you are in the local directory for your app, doing the following will take you into the running container for it:
dokku enter web.1
I've done this a few times today!
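If you are not inside the app's directory, the general form (if I remember the dokku syntax correctly) takes the app name explicitly:
dokku enter <appname> web.1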
As a workaround, you can enter your container using Docker directly. To do so, run docker ps and note the container ID. Afterwards, run docker exec -it container_id /bin/bash.
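If you prefer a one-liner, something along these lines should also work, assuming the container name contains your app name (which is the usual dokku naming scheme):
docker exec -it $(docker ps -q --filter "name=<appname>") /bin/bash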

How to share data between a Docker container and the host

I'm working on Read the Docs documentation where I use Docker. To customize it, I'd like to share the CSS folder between the container and the host, so that I don't have to build a new image every time to see the changes. The goal is that I can just refresh the browser and see the changes.
I tried something like this, but it doesn't work:
docker run -v ~/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
What is wrong in this command?
The path of the folder I'd like to share is:
Documents/my-documentation/docs/source/_static/css
Thanks for your help!
I'm guessing that the ~ does not resolve correctly. The tilde character ("~") refers to the home directory of your user; usually something like /home/your_username.
In your case, it sounds like your document isn't in this directory anyway.
Try:
docker run -v Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
I have no Mac to test with, but I suspect the command should be as below (Documents is a subfolder inside your home directory, denoted by ~):
docker run -v ~/Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
In your OP you mount the host folder ~/docs/source/_static/css, which does not make sense if your files are in Documents/my-documentation/docs/source/_static/css, as that would correspond to ~/Documents/my-documentation/docs/source/_static/css.
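Before mounting, it may also help to confirm that the path really exists on the host and that ~ expands to what you expect, for example:
echo ~
ls ~/Documents/my-documentation/docs/source/_static/css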
Keep in mind that on Mac, Docker is still running inside a VM, so you will need to give a host path that is valid on that VM.
What you can do to get a better view of the situation is to start an interactive container in which you mount the root file system of the host VM into /mnt/vm-root. That way you can see which paths are available to mount and how they should be formatted when you pass them to the -v flag of the docker run command:
docker run --rm -it -w /mnt/vm-root -v /:/mnt/vm-root ubuntu:latest bash

Is it safe to remove Docker containers listed with `docker ps -f status=created`?

I've already seen posts showing how to remove exited containers listed with docker ps -q -f status=exited, but I also want to clean up 'created' but not 'running' containers. Is it safe to remove containers with the 'created' status, or is there a downside to this?
Docker containers with created status are containers that have been created from an image but never started. Removing them has no impact, since no process has ever run inside them, so there is no state change that would need to be committed. Creating containers ahead of time like this is generally done to speed up starting them later and to make sure all the configuration is kept ready.
Refer to the Docker docs:
The docker create command creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT. This is similar to docker run -d except the container is never started. You can then use the docker start command to start the container at any point.
This is useful when you want to set up a container configuration ahead of time so that it is ready to start when you need it. The initial status of the new container is created.
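For example, a minimal illustration of that workflow (nginx is used here purely as a placeholder image):
docker create --name web -p 8080:80 nginx
docker ps -a -f status=created
docker start web
The container shows up with status Created after the first command and only starts running after docker start.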
There are two possibilities for a container to be in the created status:
As explained by @askb, a container created from an image using the docker create command ends up in the created status.
A container created by docker run but unable to start. There are multiple possible causes here, but the easiest one to reproduce is a container with a port mapping to a port that is already bound.
To answer the question, in both cases, removing them is safe.
A way to reproduce a container in a created state via the run command is:
docker pull loicmathieu/vsftpd
docker run -p 621:21 -d loicmathieu/vsftpd ftp
docker run -p 621:21 -d loicmathieu/vsftpd ftp
Then docker ps -a will give you something like
CONTAINER ID   IMAGE                COMMAND           CREATED          STATUS
e60dcd51e4e2   loicmathieu/vsftpd   "/start.sh ftp"   6 seconds ago    Created
7041c77cad53   loicmathieu/vsftpd   "/start.sh ftp"   16 seconds ago   Up 15 seconds
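Once you are sure there is nothing among them you still want to start, a sketch for batch-removing them, using the same filter as in the question:
docker rm $(docker ps -aq -f status=created)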

How to mount a directory in the docker container to the host?

It's quite easy to mount a host directory in the docker container.
But I need the other way around.
I use a docker container as a development environment for developing WordPress plugins. This docker container contains everything needed to run WordPress (MySQL, Apache, PHP and WordPress). I mount my plugin src folder from the host in the docker container, so that I can test my plugin during development.
For debugging it would be helpful if my IDE running on the host has read access to the WordPress files in the docker container.
I found two ways to solve the problem but both seem really hacky.
Adding a data volume to the docker container, with the path to the WordPress files
docker run ... -v /usr/share/wordpress/ ...
Docker adds this directory to the path on the host /var/lib/docker/vfs/dir... But you need to look up the actual path with docker inspect and you need root access rights to see the files.
Mounting a host directory to the docker container and copying the WordPress files in the container to that mounted host directory. A symlink doesn't seem to work.
Is there a better way to do that? Without copying files or changing access rights?
Thank you!
Copying the WordPress files to the mounted folder was the solution.
I move the files in the container from the original folder to the mounted folder and use symbolic links to link them back to the original folder.
The important part is that the container can follow symbolic links inside the container, but the host can't. So just symlinking from the original folder to the mounted folder doesn't work, because the host cannot follow symbolic links that point inside the container!
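A sketch of what that looks like inside the container, assuming the WordPress files live in /usr/share/wordpress and the host directory is mounted at /mnt/wordpress (both paths are just placeholders for this example):
cp -a /usr/share/wordpress/. /mnt/wordpress/
rm -rf /usr/share/wordpress
ln -s /mnt/wordpress /usr/share/wordpress
The container happily follows the symlink, while the host sees the real files in the mounted directory.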
You can share the files over SMB with svendowideit's samba container, like this:
docker run --rm -v $(which docker):/docker -v /var/run/docker.sock:/docker.sock svendowideit/samba <container name>
It's possible if you use a named volume instead of a filesystem path. The volume is created for you automatically if it doesn't already exist.
docker run -d -v usr_share_wordpress:/usr/share/wordpress --name your_container ... image
After you stop or remove your container, the volume remains on your filesystem with the files from the container.
You can inspect the volume content during the lifetime of your_container with a busybox image. Something like:
docker run -it --rm --volumes-from your_container busybox sh
After shutdown of your_container you can still check volume with:
docker run -it --rm -v usr_share_wordpress:/usr/share/wordpress busybox sh
List volumes with docker volume ls.
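If you want to reach those files directly from the host file system, the volume's location on disk can be looked up with something like:
docker volume inspect --format '{{ .Mountpoint }}' usr_share_wordpress
On a standard Linux install this is typically a directory under /var/lib/docker/volumes/, which needs root access to read.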
I had a similar need to expose files from a container to the host. There is an open issue about this as of today. One of the workarounds mentioned, using bindfs, is pretty neat; it works while the container is up and running:
container_root=/proc/$(docker inspect --format '{{.State.Pid}}' "$container_name")/root
sudo bindfs --map=root/"$USER" "$container_root/$app_folder" "$host_folder"
PS: I am not sure this is good for production, but it should work in development scenarios!
Why not just do: docker run ... -v /usr/share/wordpress/:/usr/share/wordpress. Now your local /usr/share/wordpress/ is mapped to /usr/share/wordpress in the Docker container and both have the same files. You could also mount elsewhere in the container this way. The syntax is host_path:container_path, so if you wanted to mount /usr/share/wordpress from your host to /my/new/path on the container, you'd just do: docker run ... -v /usr/share/wordpress/:/my/new/path.
