Is it possible to let Docker use the host machine's DNS? - networking

I'm using Vagrant with Docker, and I need to reach the host machine's DNS servers from the Docker container, or alternatively add a custom DNS entry to Docker (then I would add my external IP to that DNS and the effect should be the same, I think). Is this even possible? I tried to mount the /etc directory via config.yml as below:
#directory map
docker_map:
- "/etc:/etc"
and added the expected DNS entry to the mounted /etc/hosts file, but it did not work for me (maybe I'm doing something wrong?).
I also tried to add a host in the field below in config.yml:
# factory settings...
docker_hosts:
- "127.1.2.3 my-dns"
but again without success.
I also tried adding the --dns parameter to DOCKER_OPTS in /etc/default/docker, but that seems to be for something else, I guess...
Could you give me any advice? Thank you.

I finally found a solution. It turned out to be very simple.
Just call:
docker ps -a
Copy your container id, then:
docker exec -it <container_id> /bin/bash
Now you are in the container's bash shell, and you can install vim and modify /etc/hosts.
Thanks to everyone for the help!
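For what it's worth, Docker exposes both of the things the question asks for directly on the CLI, without editing files inside the container by hand. A sketch (the image name is illustrative):

```shell
# Use a custom DNS server inside the container:
docker run --dns 8.8.8.8 my-image

# Or add a single /etc/hosts entry, like the docker_hosts attempt above:
docker run --add-host my-dns:127.1.2.3 my-image
```

A daemon-wide default can also be set with the "dns" key in /etc/docker/daemon.json. Note that hand-edits made via docker exec are lost when the container is recreated, so the flags are the more durable fix.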

Related

How to share data between a docker container and the host

I'm working on Read the Docs documentation that I build with Docker. To customize it, I'd like to share the CSS folder between the container and the host, in order to avoid building a new image every time to see the changes. The goal is that I can just refresh the browser and see the changes.
I tried something like this, but it doesn't work:
docker run -v ~/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
What is wrong in this command?
The path of the folder I'd like to share is:
Documents/my-documentation/docs/source/_static/css
Thanks for your help!
I'm guessing that the ~ does not resolve correctly. The tilde character ("~") refers to the home directory of your user; usually something like /home/your_username.
In your case, it sounds like your document isn't in this directory anyway.
Try:
docker run -v Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
I have no Mac to test with, but I suspect the command should be as below (Documents is a subfolder inside your home directory, denoted by ~):
docker run -v ~/Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
In your OP you mount the host folder ~/docs/source/_static/css, which does not make sense if your files are in Documents/my-documentation/docs/source/_static/css as that would correspond to ~/Documents/my-documentation/docs/source/_static/css
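A generic way to sidestep both the tilde and relative-path confusion is to build the absolute host path explicitly before handing it to -v (a sketch; the directory name comes from the question):

```shell
# Build the absolute host path first: docker's -v flag needs an absolute
# host path, and a bare relative path like "Documents/..." is interpreted
# as a named volume rather than a bind mount.
css_dir="$HOME/Documents/my-documentation/docs/source/_static/css"
echo "$css_dir"
# then: docker run -v "$css_dir":/docs/source/_static/css -p 80:80 -it my-docu:latest
```

Using $HOME instead of ~ also avoids any doubt about whether the shell expanded the tilde before Docker saw the argument.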
Keep in mind that Docker still runs inside a VM on Mac, so you need to give a host path that is valid on that VM.
What you can do to get a better view of the situation is to start an interactive container where you mount the root file system of the host VM into /mnt/vm-root. That way you can see which paths are available to mount and how they should be formatted when you pass them to the -v flag of docker run:
docker run --rm -it -w /mnt/vm-root -v /:/mnt/vm-root ubuntu:latest bash

Docker WordPress image does not persist wp-content when creating new docker images

Let me clarify the situation:
1. Run a WordPress docker container:
docker run --name wp -d -p 80:80 wordpress
2. Log in to the running container using bash:
docker exec -it wp /bin/bash
3. Create 2 dummy files:
one in root:
touch /xxx
one in wp-content/themes:
touch /var/www/html/wp-content/themes/xxx
4. Create a new WordPress image:
docker commit wp new_wp
5. Kill the original container:
docker kill wp
6. Run the new docker image:
docker run --name new_wp -d -p 80:80 new_wp
7. Inspect the dummy files created in step 3:
the dummy file in root still exists;
the dummy file in wp-content/themes no longer exists!!!
Questions:
Can anyone explain such bizarre behaviour in step 7?
What am I supposed to do to persist wp-content data?
P.S. I am deploying to AWS ECS Fargate instances therefore using volumes is not very practical for me. Ideally - I would love to have everything under one image without files disappearing from wp-content directory.
Thank you very much for your answers.
The docker image for wordpress includes a VOLUME statement:
VOLUME /var/www/html
This forces a volume to be created on any resulting containers even if you do not specify one in your docker run command. Without a specification, you will get an anonymous volume with a long unique id that can be seen in docker volume ls.
The docker commit command (which I strongly recommend against using in any workflow that you want repeatability) only captures changes to the container filesystem (you can see these with docker container diff). The changes to the volume are not part of the container filesystem, and therefore will not be included in this commit.
To persist data, you should be defining and using a volume, e.g.:
docker run --name wp -v wpdata:/var/www/html -d -p 80:80 wordpress
Docker is inherently non-persistent.
If you want to leverage docker for WP I highly recommend offloading image asset management to S3 and Cloudfront.
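If everything really has to live in one image (as in the Fargate setup above), the repeatable alternative to docker commit is to bake the content in at build time. A Dockerfile sketch, with the theme path illustrative:

```dockerfile
# Build-time copy instead of docker commit (theme name is illustrative).
# Files baked into /var/www/html in the image layer are copied into the
# fresh anonymous volume when a new container is created from this image.
FROM wordpress:latest
COPY ./my-theme /var/www/html/wp-content/themes/my-theme
```

Unlike committing a running container, this captures the files in an image layer (not in the volume), so they survive container recreation and the build is reproducible.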

Send a file via SFTP to a Docker Container

I have a Docker container running an app on Linux. The container is hosted on a Mac (development) or on AWS (production). I want to be able to send a file to this container remotely. How can I achieve that?
Thank you.
You need to install an SSH server in the image you are running, or make sure one is already installed. Then you need to map the SSH port (default 22) in your container to a port on the host, so you can reach the container from outside the host. For example:
docker run -p 10022:22 app_container
If running on AWS, check the security group for the EC2 instance your container runs on, and allow the host port (10022 in the example above) to be accessible from outside.
You may also use docker cp to copy files to and from a container and the local drive.
Be aware of the syntax: wildcards (*) are not possible, but cp is recursive and copies directories...
So e.g.
docker cp c867cee9451f:/var/www/html/themes/ .
copies the whole themes folder with subdirectories to your local drive, while
docker cp c867cee9451f:/var/www/html/themes/* .
won't work.
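If the container runs on a remote machine (the AWS case above), a common pattern is to combine scp/ssh with docker cp on the remote host, rather than running an SSH server inside the container. A sketch with illustrative host and path names:

```shell
# Copy the file to the remote host first, then into the container there.
scp ./myfile.txt user@remote-host:/tmp/
ssh user@remote-host 'docker cp /tmp/myfile.txt app_container:/opt/app/'
```

This keeps the container image unchanged and relies only on SSH access to the host.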

How to mount a directory in the docker container to the host?

It's quite easy to mount a host directory in the docker container.
But I need the other way around.
I use a docker container as a development environment for developing WordPress plugins. This docker container contains everything needed to run WordPress (MySQL, Apache, PHP and WordPress). I mount my plugin src folder from the host in the docker container, so that I can test my plugin during development.
For debugging it would be helpful if my IDE running on the host has read access to the WordPress files in the docker container.
I found two ways to solve the problem but both seem really hacky.
Adding a data volume to the docker container, with the path to the WordPress files
docker run ... -v /usr/share/wordpress/ ...
Docker adds this directory to a path on the host under /var/lib/docker/vfs/dir... But you need to look up the actual path with docker inspect, and you need root access rights to see the files.
Mounting a host directory to the docker container and copying the WordPress files in the container to that mounted host directory. A symlink doesn't seem to work.
Is there a better way to do that? Without copying files or changing access rights?
Thank you!
Copying the WordPress files to the mounted folder was the solution.
I move the files in the container from the original folder to the mounted folder and use symbolic links to link them back to the original folder.
The important part is: the container can follow symbolic links inside the container, but the host can't. So just creating symbolic links from the original folder to the mounted folder doesn't work, because the host cannot follow symbolic links that point inside the container!
You can share the files using SMB with SvenDowideit's samba container, like this:
docker run --rm -v $(which docker):/docker -v /var/run/docker.sock:/docker.sock svendowideit/samba <container name>
It's possible if you use a named volume instead of a filesystem path. The volume is created for you automatically if it doesn't already exist.
docker run -d -v usr_share_wordpress:/usr/share/wordpress --name your_container ... image
After you stop or remove your container, the volume remains on your filesystem with the files from the container.
You can inspect volume content during lifetime of your_container with busybox image. Something like:
docker run -it --rm --volumes-from your_container busybox sh
After shutdown of your_container you can still check volume with:
docker run -it --rm -v usr_share_wordpress:/usr/share/wordpress busybox sh
List volumes with docker volume ls.
I had a similar need to expose files from a container to the host. There is an open issue on this as of today. One of the workarounds mentioned, using bindfs, is pretty neat; it works while the container is up and running:
container_root=/proc/$(docker inspect --format '{{.State.Pid}}' "$container_name")/root
sudo bindfs --map=root/"$USER" "$container_root/$app_folder" "$host_folder"
PS: I am not sure this is good for production, but it should work in development scenarios!
Why not just do: docker run ... -v /usr/share/wordpress/:/usr/share/wordpress. Now your local /usr/share/wordpress/ is mapped to /usr/share/wordpress in the Docker container and both have the same files. You could also mount elsewhere in the container this way. The syntax is host_path:container_path, so if you wanted to mount /usr/share/wordpress from your host to /my/new/path on the container, you'd just do: docker run ... -v /usr/share/wordpress/:/my/new/path.

wget: unable to resolve host address `http'

I am getting this strange error on my Ubuntu 12.04 64-bit machine when I run wget:
$ wget google.com
--2014-07-18 14:44:32-- http://google.com/
Resolving http (http)... failed: Name or service not known.
wget: unable to resolve host address `http'
I have encountered this problem before, when it occurred for any web page (not for `http` itself); that time it required adding my nameserver to /etc/resolv.conf.
However, here that doesn't seem to be the problem; instead wget is treating http as a host name. Any advice?
The DNS server seems to be out of order. You can use another DNS server, such as 8.8.8.8: put nameserver 8.8.8.8 on the first line of /etc/resolv.conf.
Remove the http or https from wget https:github.com/facebook/facebook-php-sdk/archive/master.zip . This worked fine for me.
I have this issue too. I suspect there is an issue with DigitalOcean’s nameservers, so this will likely affect a lot of other people too.
Here’s what I’ve done to temporarily get around it - but someone else might be able to advise on a better long-term fix:
Make sure your DNS resolver config file is writable:
sudo chmod o+w /etc/resolv.conf
Temporarily change your DNS to use Google’s nameservers instead of DigitalOcean’s:
sudo nano /etc/resolv.conf
Change the IP address in the file to: 8.8.8.8
Press CTRL + X, then confirm to save the file.
This is only a temporary fix as this file is automatically written/updated by the server, however, I’ve not yet worked out what writes to it so that I can update it permanently.
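On Ubuntu releases that manage /etc/resolv.conf with resolvconf, the file is regenerated from templates under /etc/resolvconf/resolv.conf.d/, which is one place the automatic rewriting mentioned above comes from. A sketch of a more permanent fix (run as root, and only if resolvconf is installed):

```shell
# Add the nameserver to the template that resolvconf regenerates
# /etc/resolv.conf from, then rebuild the file:
echo "nameserver 8.8.8.8" >> /etc/resolvconf/resolv.conf.d/head
resolvconf -u
```

This way the entry survives the periodic rewrites instead of being overwritten.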
I figured out what went wrong. In the proxy configuration of my box, an extra http:// got prefixed to "proxy server with http".
Example..
http://http://proxy.mycollege.com
and that has created problems. Corrected that, and it works perfectly.
Thanks @WhiteCoffee and @ChrisBint for your suggestions!
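A quick way to spot this kind of doubled-scheme mistake is to inspect the proxy environment variables; a sketch (the proxy host and port are illustrative):

```shell
# Show any proxy settings currently in the environment (may print nothing):
env | grep -i proxy || true

# A well-formed value carries exactly one scheme prefix:
export http_proxy="http://proxy.mycollege.com:3128"
echo "$http_proxy"
```

A value like http://http://proxy.mycollege.com is the symptom described above and will make wget try to resolve "http" as a host.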
If using Vagrant try reloading your box. This solved my issue.
It might happen because of the overriding of resolv.conf. This answer helped me; use the commands below every time you set up a new WSL. sudo chattr +i /etc/resolv.conf makes the file immutable, so it won't be overwritten the next time you start WSL.
sudo bash -c 'echo -e "[network]
generateResolvConf = false" > /etc/wsl.conf
rm /etc/resolv.conf
echo -e "options timeout:1 attempts:1 rotate
nameserver 1.1.1.1
nameserver 1.0.0.1" > /etc/resolv.conf
chattr -f +i /etc/resolv.conf'
