Send a file via SFTP to a Docker Container - networking

I have a Docker container running a Linux app. The container is hosted on a Mac (development) or on AWS (production). I want to be able to send a file to this container remotely. How can I achieve that?
Thank you.

You need to install an SSH server in the image you are running, or make sure one is already installed. Then you need to map the SSH port (default 22) of your container to a host port so you can reach the container from outside the host. For example:
docker run -p 10022:22 app_container
If running on AWS, check the security group of the EC2 instance the container runs on and allow the host port (10022 in the example above) to be accessible from outside.
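Once the port is published and an SFTP-capable SSH server is running in the container, you can connect from a remote machine. A minimal sketch, assuming a user named app exists inside the container and the host is reachable as example.com:
sftp -P 10022 app@example.com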

You may also use docker cp to copy files between a container and the local drive.
Be aware of the syntax: wildcards like * are not supported, but docker cp is recursive and copies directories.
So e.g.
docker cp c867cee9451f:/var/www/html/themes/ .
copies the whole themes folder with its subdirectories to your local drive, while
docker cp c867cee9451f:/var/www/html/themes/* .
won't work, because the wildcard is not expanded.
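Copying in the other direction works the same way, for example (the file name here is just an illustration):
docker cp ./local-file.txt c867cee9451f:/var/www/html/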

Related

Is it possible to allow Docker to use the host machine's DNS servers?

I'm using Vagrant with Docker, and I currently need to reach the host machine's DNS servers from the Docker container, or even add my own custom DNS to Docker (then I would add my external IP to that DNS and the effect would be the same, I think). Is that even possible? I tried to mount the /etc directory via config.yml as below
#directory map
docker_map:
- "/etc:/etc"
and added the expected DNS entry to the mounted /etc/hosts file, but it is not working for me (maybe I am doing something wrong?).
I also tried to add a host in the field below in config.yml
# factory settings...
docker_hosts:
- "127.1.2.3 my-dns"
but again without success.
I also tried to add the --dns parameter to DOCKER_OPTS in /etc/default/docker, but I guess that is for something else...
Could you give me any advice? Thank you.
I finally found a solution. It turned out to be very simple, of course.
Just call:
docker ps -a
Copy your container id, then:
docker exec -it <container_id> /bin/bash
Now you are in your container's bash, where you can install vim and modify /etc/hosts.
Thanks to everyone for the help.
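For reference, docker run can also add host entries and DNS servers directly, without editing files inside the container. A minimal sketch, reusing the hostname and IP from the question and an arbitrary public DNS server:
docker run --add-host my-dns:127.1.2.3 --dns 8.8.8.8 -it <image> /bin/bash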

How to expose ports only within the docker network?

I have a few apps running in a Docker network with their ports (3000, 4200, etc.) exposed. I also have an nginx container running within the same Docker network which serves these apps on port 80 under different domain names (site1.com, site2.com).
But right now if I go directly to the ports the apps are running on (localhost:3000) I can access them that way too.
How do I expose those ports only to the nginx container and not the host system?
You can reach the apps directly on their ports (localhost:3000) because you are using -p, aka --publish, in your docker run command.
Explanation:
If you want to expose ports between containers only, do not use -p or --publish; just put the containers on the same Docker network.
Example:
Let's create a new user-defined network:
sudo docker network create appnet
Let's create the nginx container for the reverse proxy. It should be available to the outside world, so we publish its port:
sudo docker run --name nginx -d --network appnet -p 80:80 nginx
Now put your apps on the same network, but do not publish their ports:
sudo docker run --name app1 -d --network appnet <app image name/id>
You have to use Docker networks.
On the default setup, published ports are bound to the host and are therefore accessible from localhost. You can either configure Docker yourself, creating a network manually, or let tools like docker-compose or Kubernetes do it for you.
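With docker-compose the same idea looks roughly like this (the app image names are placeholders); services defined in one compose file share a default network, and only nginx publishes a port to the host:
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  app1:
    image: app1-image
  app2:
    image: app2-image
nginx can then proxy to app1:3000 and app2:4200 by service name, while the host cannot reach those ports directly.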

Hostname resolution fails when running docker build from a docker container

We are running a Jenkins CI server in a Docker container, started with docker-compose. The Jenkins server runs some jobs which pull projects from Git and build Docker images the standard way by executing docker build . on them. To be able to use Docker inside the Jenkins container, we mount the host's /var/run/docker.sock into it with docker-compose.
Some of the Dockerfiles we are trying to build there download files from our fileserver (third-party installation images, for example). Such a Dockerfile command looks like RUN curl -o xx.zip http://fileserver/xx-1.2.3.zip.
The fileserver hostname is resolved through the /etc/hosts file to the public IP of the host that runs the Jenkins CI server. The docker-compose config for the Jenkins container also includes the extra_hosts parameter, pointing fileserver to the host's public IP.
The problem is that building the Docker image with Jenkins running in its own container fails with a plain Unknown host: fileserver message. If I enter the Jenkins container via docker exec -it <id>, I can execute the same curl command and it resolves the host; but if I run docker build . there, which tries to run the same curl command, it fails to resolve the host.
Our host runs RHEL, and I failed to reproduce the problem on my desktop Arch Linux, so I suspect it is some Red Hat-specific issue (again).
Add --network=host so that the build environment uses the host machine's name resolution:
docker build --network=host -t foo/bar:latest .
Docker builds don't happen on the machine issuing the command (your Jenkins container, in this case); they happen on the machine with the Docker Engine. This means that your Jenkins container tars up the source directory and ships it to the parent machine for the build to happen. So, check whether the curl command works from the parent machine, not from the Jenkins container.
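If host networking is not desirable, docker build also accepts --add-host to inject the mapping into the build containers; the IP below is a placeholder for your fileserver's address:
docker build --add-host fileserver:203.0.113.10 -t foo/bar:latest .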

Docker EXPOSE vs command line -p option (boot2docker)

After spending way too long trying to access my Node server running in a Docker container within a boot2docker instance, I realised the issue came down to the difference between EXPOSE and docker run -p.
In my Dockerfile I had EXPOSE 3001, and I could not access the port from my host machine.
After running "docker run -p 3001:3001 myappinst myapp" I was able to access the port.
Up until now I thought "docker run -p 3001:3001" was essentially the same as EXPOSE 3001 in the Dockerfile.
I noticed, however, that when running docker ps
I get the following for the EXPOSE-only container:
CONTAINER ID   IMAGE                        COMMAND       CREATED         STATUS         PORTS      NAMES
16341e2b9968   housemation-crawler:latest   "npm start"   2 minutes ago   Up 2 minutes   3001/tcp   housemation-crawler-inst
(note: 3001/tcp)
vs the below with docker run -p 3001:3001
CONTAINER ID   IMAGE                        COMMAND       CREATED         STATUS         PORTS                    NAMES
0b14f736033c   housemation-crawler:latest   "npm start"   8 seconds ago   Up 2 seconds   0.0.0.0:3001->3001/tcp   housemation-crawler-inst
(0.0.0.0:3001->3001/tcp)
Looks like the latter is doing some kind of port forwarding, whereas the former is just opening up the port? Would that be right?
If I wanted to access a non-forwarded exposed port, how would I go about doing so? Also, if I wanted to have port forwarding within the Dockerfile, what would be the correct syntax?
Your assumptions about how EXPOSE in the Dockerfile and the -p option of docker run work are right. As you can read in the Docker online documentation:
EXPOSE <port> [<port>...]
The EXPOSE instruction informs Docker that the container will listen
on the specified network ports at runtime. Docker uses this
information to interconnect containers using links (see the Docker
User Guide) and to determine which ports to expose to the host when
using the -P flag. Note: EXPOSE doesn't define which ports can be
exposed to the host or make ports accessible from the host by default.
To expose ports to the host, at runtime, use the -p flag or the -P
flag.
So the EXPOSE instruction in the Dockerfile tells Docker which ports to map to the host if you run the container with the -P flag; the host ports chosen are not deterministic and are assigned by Docker at run time. Apart from this, Docker uses the ports in EXPOSE to export information as environment variables in linked containers.
If you want to choose the host port yourself, you have to use the -p option of docker run.
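As a quick illustration, reusing the image from the question: with only EXPOSE 3001 in the Dockerfile you can still publish the port to a random host port with -P and look up the mapping afterwards:
docker run -d -P --name housemation-crawler-inst housemation-crawler:latest
docker port housemation-crawler-inst 3001
# prints something like 0.0.0.0:32768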

How to mount a directory in the docker container to the host?

It's quite easy to mount a host directory in the docker container.
But I need the other way around.
I use a docker container as a development environment for developing WordPress plugins. This docker container contains everything needed to run WordPress (MySQL, Apache, PHP and WordPress). I mount my plugin src folder from the host in the docker container, so that I can test my plugin during development.
For debugging it would be helpful if my IDE running on the host has read access to the WordPress files in the docker container.
I found two ways to solve the problem but both seem really hacky.
Adding a data volume to the docker container, with the path to the WordPress files
docker run ... -v /usr/share/wordpress/ ...
Docker stores this directory on the host under /var/lib/docker/vfs/dir..., but you need to look up the actual path with docker inspect, and you need root access rights to see the files.
Mounting a host directory to the docker container and copying the WordPress files in the container to that mounted host directory. A symlink doesn't seem to work.
Is there a better way to do that? Without copying files or changing access rights?
Thank you!
Copying the WordPress files to the mounted folder was the solution.
I moved the files in the container from the original folder to the mounted folder and used symbolic links to link them back to the original folder.
The important part is that symbolic links can be followed inside the container, but not from the host. So just symlinking from the original folder to the mounted folder doesn't work, because the host cannot follow symbolic links that point into the container!
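Roughly, the steps inside the container look like this (/var/www/mounted is just an assumed example for where the host directory is mounted in the container):
mv /usr/share/wordpress /var/www/mounted/wordpress
ln -s /var/www/mounted/wordpress /usr/share/wordpress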
You can share the files over SMB with SvenDowideit's samba container like this:
docker run --rm -v $(which docker):/docker -v /var/run/docker.sock:/docker.sock svendowideit/samba <container name>
It's possible if you use a named volume instead of a filesystem path. The volume is created for you automatically if it doesn't already exist.
docker run -d -v usr_share_wordpress:/usr/share/wordpress --name your_container ... image
After you stop or remove your container, the volume remains on your filesystem with the files from the container.
You can inspect the volume's content during the lifetime of your_container with the busybox image. Something like:
docker run -it --rm --volumes-from your_container busybox sh
After shutting down your_container you can still check the volume with:
docker run -it --rm -v usr_share_wordpress:/usr/share/wordpress busybox sh
List volumes with docker volume ls.
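You can also ask Docker where the volume's files live on the host (the path shown is the usual default and may differ on your system):
docker volume inspect usr_share_wordpress
# look for "Mountpoint", typically /var/lib/docker/volumes/usr_share_wordpress/_data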
I had a similar need to expose files from the container to the host. There is an open issue on this as of today. One of the workarounds mentioned, using bindfs, is pretty neat; it works while the container is up and running:
container_root=/proc/$(docker inspect --format '{{.State.Pid}}' "$container_name")/root
sudo bindfs --map=root/"$USER" "$container_root/$app_folder" "$host_folder"
PS: I am not sure this is good for production, but it should work in development scenarios!
Why not just do: docker run ... -v /usr/share/wordpress/:/usr/share/wordpress. Now your local /usr/share/wordpress/ is mapped to /usr/share/wordpress in the Docker container and both have the same files. You could also mount elsewhere in the container this way. The syntax is host_path:container_path, so if you wanted to mount /usr/share/wordpress from your host to /my/new/path on the container, you'd just do: docker run ... -v /usr/share/wordpress/:/my/new/path.
