I created some containers on my system (Ubuntu 14.04) using docker-compose, which mounted directories from the host into the containers.
Now, every time I reboot the host, these directories are recreated, even though the containers no longer exist (which is why I deleted the directory).
Example:
I had a container for gitlab-ci in
/var/docker/gitlab-ci/
Containing the files/directories
docker-compose.yml
data/
postgresql/
I have deleted the directory
gitlab-ci/
now,
gitlab-ci/data
gitlab-ci/postgresql
are created after every reboot.
How do I get rid of them?
Those containers might still exist. What does the docker ps -a command show?
If you can still see your gitlab related containers, remove them with docker rm -f <container name>, then delete the directories and reboot to check if they appear again.
If you still have your docker-compose.yml file, then you could have removed those containers with:
docker-compose stop
docker-compose rm
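Putting the two suggestions together, a possible cleanup sequence could look like this (just a sketch, assuming the compose file is still at the path from the question):
cd /var/docker/gitlab-ci
docker ps -a            # check whether the gitlab-ci containers are still listed
docker-compose stop     # stop them
docker-compose rm       # remove them
rm -rf data postgresql  # the directories should now stay deleted after a reboot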
I'm working on a Read the Docs documentation project where I use Docker. To customize it, I'd like to share the css folder between the container and the host, in order to avoid always building a new image just to see the changes. The goal is that I can simply refresh the browser and see the changes.
I tried something like this, but it doesn't work:
docker run -v ~/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
What is wrong in this command?
The path of the folder I'd like to share is:
Documents/my-documentation/docs/source/_static/css
Thanks for your help!
I'm guessing that the path does not resolve to the right place. The tilde character ("~") refers to the home directory of your user, usually something like /home/your_username.
In your case, it sounds like your documentation isn't in ~/docs anyway.
Try pointing the mount at the folder's actual location (host paths for -v must be absolute):
docker run -v "$HOME/Documents/my-documentation/docs/source/_static/css":/docs/source/_static/css -p 80:80 -it my-docu:latest
I have no Mac to test with, but I suspect the command should be as below (Documents is a subfolder inside your home directory, denoted by ~):
docker run -v ~/Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
In your OP you mount the host folder ~/docs/source/_static/css, which does not make sense if your files are in Documents/my-documentation/docs/source/_static/css; that path corresponds to ~/Documents/my-documentation/docs/source/_static/css.
Keep in mind that Docker is still running inside a VM on Mac, so you will need to give a host path that is valid on that VM.
What you can do to get a better view of the situation is to start an interactive container where you mount the root file system of the host VM into /mnt/vm-root. That way you can see what paths are available to mount and how they should be formatted when you pass them to the -v flag of the docker run command:
docker run --rm -it -w /mnt/vm-root -v /:/mnt/vm-root ubuntu:latest bash
Let me clarify the situation:
1. Run the wordpress docker container:
docker run --name wp -d -p 80:80 wordpress
2. Log in to the running container using bash:
docker exec -it wp /bin/bash
3. Create 2 dummy files:
One in root:
touch /xxx
One in wp-content/themes:
touch /var/www/html/wp-content/themes/xxx
4. Create a new wordpress image:
docker commit wp new_wp
5. Kill the original container:
docker kill wp
6. Run the new docker image:
docker run --name new_wp -d -p 80:80 new_wp
7. Inspect the dummy files created in step 3:
Dummy file in root exists.
Dummy file in wp-content/themes no longer exists!!!
Questions:
Can anyone explain such a bizarre behaviour in step 7?
What am I supposed to do to persist wp-content data?
P.S. I am deploying to AWS ECS Fargate instances, so using volumes is not very practical for me. Ideally, I would love to have everything under one image, without files disappearing from the wp-content directory.
Thank you very much for your answers.
The docker image for wordpress includes a VOLUME statement:
VOLUME /var/www/html
This forces a volume to be created on any resulting containers even if you do not specify one in your docker run command. Without a specification, you will get an anonymous volume with a long unique id that can be seen in docker volume ls.
The docker commit command (which I strongly recommend against using in any workflow that you want repeatability) only captures changes to the container filesystem (you can see these with docker container diff). The changes to the volume are not part of the container filesystem, and therefore will not be included in this commit.
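For illustration, these are the two inspection commands mentioned above, using the container name wp from the question:
docker container diff wp   # lists filesystem changes such as /xxx; changes under the /var/www/html volume do not show up
docker volume ls           # lists the anonymous volume that was created for /var/www/html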
To persist data, you should be defining and using a volume, e.g.:
docker run --name wp -v wpdata:/var/www/html -d -p 80:80 wordpress
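If you prefer docker-compose, the equivalent is a named volume in the compose file; a minimal sketch (service and volume names are illustrative):
version: "2"
services:
  wordpress:
    image: wordpress
    ports:
      - "80:80"
    volumes:
      - wpdata:/var/www/html
volumes:
  wpdata: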
Docker is inherently non-persistent.
If you want to leverage Docker for WP, I highly recommend offloading image asset management to S3 and CloudFront.
So my Dockerfile runs via docker-compose using:
Dockerfile
FROM nginx
#COPY conf
COPY myapp/ /usr/share/nginx/html
RUN chmod -R 664 /usr/share/nginx/html
RUN chown -R nginx /usr/share/nginx/html
RUN chcon -R -t httpd_sys_content_t /usr/share/nginx/html
This is on RHEL 6.x, and the Docker version is old as well, 1.7 or so.
I don't even need the RUN chmod/chown/chcon lines for most environments! The Dockerfile works just fine on Windows.
However, I still get 403 Forbidden errors whenever nginx tries to access ANY file in /usr/share/nginx/html.
What is the correct way to set up nginx in a Docker container and avoid these SELinux problems? (SELinux is set to "Enforcing".)
In fact, if you do
RUN/CMD ls -l
we can see that nginx is the user who owns that folder and that it has the right permissions! So what the heck is going on?
Special circumstances related to the old Docker 1.7.1 and RHEL 6 mean you basically have to install RHEL 7. SELinux does not work well with this combination, and there are some core RHEL 6 library issues (shared library permission errors) making it nearly impossible to use with Docker 1.7.1.
The labels are all wrong: the processes inside the image get init_rc_t type labels, which are incorrect. The files can be changed to httpd_sys_content_t, but it doesn't help.
I think there may also be some nginx:nginx (UID/GID mismatch) issues.
But really, it's give-up time. It's not worth investing time in resolving it, and my host provider wouldn't call RHEL6 to ask about it.
I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without problems.
But the problem is that Docker somehow cannot resolve this "custom IP", even when the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up I see this error
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into root of project, where Dockerfile is located
build image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker Cloud workflow
Add the extra_hosts directive to your Stackfile, like this:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
and then click Redeploy in Docker Cloud, so that the changes take effect.
Optimization tip
Ignore as many folders as possible to speed up the process of sending data to the docker daemon:
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_modules and tmp
in my case the tmp folder contained about 1.3 GB of small files, so ignoring it sped up the process significantly
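A minimal .dockerignore along those lines (the folder names are just the examples mentioned above):
# everything listed here is not sent to the docker daemon
node_modules
bower_modules
tmp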
It's quite easy to mount a host directory in the docker container.
But I need the other way around.
I use a docker container as a development environment for developing WordPress plugins. This docker container contains everything needed to run WordPress (MySQL, Apache, PHP and WordPress). I mount my plugin src folder from the host in the docker container, so that I can test my plugin during development.
For debugging it would be helpful if my IDE running on the host has read access to the WordPress files in the docker container.
I found two ways to solve the problem but both seem really hacky.
Adding a data volume to the docker container, with the path to the WordPress files
docker run ... -v /usr/share/wordpress/ ...
Docker puts this directory on the host under /var/lib/docker/vfs/dir..., but you need to look up the actual path with docker inspect, and you need root access rights to see the files.
Mounting a host directory to the docker container and copying the WordPress files in the container to that mounted host directory. A symlink doesn't seem to work.
Is there a better way to do that? Without copying files or changing access rights?
Thank you!
Copying the WordPress files to the mounted folder was the solution.
I move the files in the container from the original folder to the mounted folder and use symbolic links to link them back to the original folder.
The important part is that the container can follow symbolic links inside the container, but the host can't. So just using symbolic links from the original folder to the mounted folder doesn't work, because the host cannot follow the symbolic links in the container!
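Roughly, inside the container it looks like this (a sketch; /usr/share/wordpress is the WordPress location from the question and /mounted-wordpress is a hypothetical path where the host directory is mounted):
mv /usr/share/wordpress/* /mounted-wordpress/     # move the files into the mounted host folder
ln -s /mounted-wordpress/* /usr/share/wordpress/  # link each entry back into the original folder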
You can share the files using SMB with SvenDowideit's samba container, like this:
docker run --rm -v $(which docker):/docker -v /var/run/docker.sock:/docker.sock svendowideit/samba <container name>
It's possible to do this if you use a volume instead of a filesystem path. The volume is created for you automatically if it doesn't already exist.
docker run -d -v usr_share_wordpress:/usr/share/wordpress --name your_container ... image
After you stop or remove your container, your volume will still be stored on your filesystem, with the files from the container.
You can inspect volume content during lifetime of your_container with busybox image. Something like:
docker run -it --rm --volumes-from your_container busybox sh
After shutdown of your_container you can still check volume with:
docker run -it --rm -v usr_share_wordpress:/usr/share/wordpress busybox sh
List volumes with docker volume ls.
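If you want the host-side location of that volume (as noted in the question, it lives under /var/lib/docker and needs root to read), docker volume inspect prints it:
docker volume inspect usr_share_wordpress   # the "Mountpoint" field is the path on the host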
I had a similar need to expose files from a container to the host. There is an open issue on this as of today. One of the work-arounds mentioned, using bindfs, is pretty neat; it works while the container is up and running:
container_root=/proc/$(docker inspect --format '{{.State.Pid}}' "$container_name")/root
sudo bindfs --map=root/"$USER" "$container_root/$app_folder" "$host_folder"
PS: I am not sure this is good for production, but it should work in development scenarios!
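When you no longer need the mapping, the bindfs mount can be removed like any other mount (this step is not part of the original work-around):
sudo umount "$host_folder"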
Why not just do: docker run ... -v /usr/share/wordpress/:/usr/share/wordpress. Now your local /usr/share/wordpress/ is mapped to /usr/share/wordpress in the Docker container and both have the same files. You could also mount elsewhere in the container this way. The syntax is host_path:container_path, so if you wanted to mount /usr/share/wordpress from your host to /my/new/path on the container, you'd just do: docker run ... -v /usr/share/wordpress/:/my/new/path.