I would like to network with a child docker container from a parent docker container, with a docker-in-docker setup.
Let's say I'm trying to connect to a simple Apache httpd server. When I run the httpd container on my host machine, everything works fine:
asnyder:~$ docker run -d -p 8080:80 httpd:alpine
asnyder:~$ curl localhost:8080
<html><body><h1>It works!</h1></body></html>
But when I do the same from a docker-in-docker setup, I get a Connection refused error:
asnyder:~$ docker run -d --name mydind --privileged docker:dind
asnyder:~$ docker run -it --link mydind:docker docker:latest sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
I have tried a couple of alterations, without luck. Specifying the 0.0.0.0 interface:
asnyder:~$ docker run -d --name mydind --privileged docker:dind
asnyder:~$ docker run -it --link mydind:docker docker:latest sh
/ # docker run -d -p 0.0.0.0:8080:80 httpd:alpine
/ # curl 0.0.0.0:8080
curl: (7) Failed to connect to 0.0.0.0 port 8080: Connection refused
Using the host network:
asnyder:~$ docker run -d --name mydind --privileged docker:dind
asnyder:~$ docker run -it --link mydind:docker docker:latest sh
/ # docker run -d --network host httpd:alpine
/ # curl localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
Surprisingly, I was unable to find any existing articles on this. Does anyone here have some insight?
Thanks!
There are pros and cons to both DinD and bind-mounting the Docker socket, and there are certainly use cases for each. As an example, check out this set of blog posts, which do a good job of explaining one of the use cases.
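For comparison, here is a minimal sketch of the socket-mount alternative (assuming the host daemon listens on the default /var/run/docker.sock):
# Bind-mount the host's Docker socket instead of running a separate dind daemon.
# Containers started from this shell are siblings on the host, so their
# published ports appear on the host itself.
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest sh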
Given your example docker-in-docker setup above, you can access Apache httpd server in one of two ways:
1) From inside the docker:dind container, it will be available on localhost:8080.
2) From inside the docker:latest container, where you were trying to access it originally, it will be available on whatever hostname is set for the docker:dind container. In this case, you used --name mydind, therefore curl mydind:8080 would give you the standard Apache <html><body><h1>It works!</h1></body></html>.
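Note that because the docker:latest container was started with --link mydind:docker, the link alias docker should resolve as well, so this sketch ought to work the same way:
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl docker:8080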
Hope it makes sense!
Building upon Yuriy's answer:
2) From inside the docker:latest container, [...] it will be available on whatever hostname is set for the docker:dind container. In this case, you used --name mydind, therefore curl mydind:8080 [...]
In the Gitlab CI config, you can address the DinD container by the name of its image (in addition to the name of its container, which is auto-generated):
Accessing the services
Let’s say that you need a Wordpress instance to test some API integration with your application.
You can then use for example the tutum/wordpress image in your .gitlab-ci.yml:
services:
- tutum/wordpress:latest
If you don’t specify a service alias, when the job is run, tutum/wordpress will be started and you will have access to it from your build container under two hostnames to choose from:
tutum-wordpress
tutum__wordpress
Using
services:
- docker:dind
will allow you to access that container as docker:8080:
script:
- docker run -d -p 8080:80 httpd:alpine
- curl docker:8080
Edit: If you'd prefer a more explicit host name, you can, as the documentation states, use an alias:
services:
  - name: docker:dind
    alias: dind-service
and then
script:
- docker run -d -p 8080:80 httpd:alpine
- curl dind-service:8080
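Putting the pieces together, a minimal .gitlab-ci.yml sketch might look like this (the DOCKER_HOST variable and the unencrypted port 2375 are assumptions; newer docker:dind images default to TLS on port 2376):
image: docker:latest

services:
  - name: docker:dind
    alias: dind-service

variables:
  # Point the docker CLI at the dind service (assumes TLS is disabled;
  # use tcp://dind-service:2376 plus certificates if your runner enables TLS).
  DOCKER_HOST: tcp://dind-service:2375

test:
  script:
    - docker run -d -p 8080:80 httpd:alpine
    - sleep 2   # give httpd a moment to start
    - curl dind-service:8080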
Hth,
dtk
I am convinced that Yuriy Znatokov's answer is what I wanted, but it took me a long time to understand it. To make it easier for later readers, I have written out the complete steps.
1) From inside the docker:dind container (enter it with docker exec first)
docker run -d --name mydind --privileged docker:dind
docker exec -it mydind sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl localhost:8080
<html><body><h1>It works!</h1></body></html>
2) From inside the docker:latest container
docker run -d --name mydind --privileged docker:dind
docker run -it --link mydind:docker docker:latest sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl mydind:8080
<html><body><h1>It works!</h1></body></html>
Related
I would like to know whether it is possible to provide a local development environment for Wordpress where the content can be uploaded via FTP.
I already got the official Docker image of Wordpress running:
docker run --name wordpress-db -e MYSQL_ROOT_PASSWORD=admin -p 3306:3306 -d mysql
docker run --name wordpress --link wordpress-db:mysql -p 8080:80 -d wordpress
Next, I'm trying to use an FTP Docker image. After a short search I chose metabrainz/docker-anon-ftp, but I'm not tied to this image if there are better ones. I have extended my commands as follows:
docker run --name wordpress --link wordpress-db:mysql -p 8080:80 -v /wordpress-volume:/var/www/html/wp-content -d wordpress
docker run --name wordpress-ftp -p 20-21:20-21 -p 65500-65515:65500-65515 -v /wordpress-volume:/var/www/html/wp-content -d metabrainz/docker-anon-ftp
Now I have the problem that the FTP server is reachable, but does not provide me with any content.
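For reference, here are checks I can run to narrow it down (a sketch; the container names are from the commands above):
# What does each container actually see at the shared mount?
docker exec wordpress ls -la /var/www/html/wp-content
docker exec wordpress-ftp ls -la /var/www/html/wp-content
# Which arguments is the FTP daemon running with, i.e. which directory does it serve?
docker exec wordpress-ftp ps aux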
What did I do wrong? Can someone give me the commands with which I can upload files (from my IDE) into Wordpress via FTP?
I want nginx in a Docker container to host a simple static hello-world HTML website, and I want to start it simply with "docker run imagename". To do that, I added the run parameters to the Dockerfile. The reason is that I would like to host the application on Cloud Foundry as a next step. Unfortunately, I get the following error when doing it like this.
Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 5000
CMD ["nginx -d -p 5000:5000"]
Error
Error starting userland proxy: Bind for 0.0.0.0:5000: unexpected error Permission denied.
From the Docker documentation:
https://docs.docker.com/engine/reference/builder/#expose
EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number
CMD ["nginx -d -p 5000:5000"]
You add your dockerfile
FROM nginx:alpine
its already starts nginx.
after you build from your dockerfile
you should use this on
docker run -d -p 5000:5000 <your_image>
Edit:
If you want to map the container's port 80 (where nginx listens by default) to port 5000 on your machine:
docker run -d -p 5000:80 <your_image>
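For completeness, a minimal corrected Dockerfile sketch; no CMD is needed because the nginx:alpine base image already defines one, and nginx listens on port 80 inside the container:
FROM nginx:alpine
COPY . /usr/share/nginx/html
# nginx listens on 80; publish it at run time, e.g.:
#   docker run -d -p 5000:80 <your_image>
EXPOSE 80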
The challenge
As described, I want to accomplish the same goal with docker itself as I would with the help of docker-compose.
I want to get a deeper understanding of Docker and be able to work with it on platforms where docker-compose is not an option.
What I do currently (with docker-compose)
1)
I use this docker-compose file:
---
version: '3'
services:
  app:
    build: .
  proxy:
    build: docker/proxy
    ports:
      - "80:80"
The "app" service starts a container which runs node on port 3002 (is exposed in the dockerfile)
The "proxy" service starts a container which runs an nginx with - among others - the following conf:
server {
    listen 80;
    server_name app;
    location / {
        proxy_pass http://app:3002;
    }
}
2)
Then I add this to the /etc/hosts of my host pc:
127.0.0.1 app
3)
Now I run docker-compose up and visit http://app, which hits the node app.
Nice and simple, right?
Now I want to do the same only with docker.
What I've tried
1) Using the same nginx configuration.
2) Starting the containers with a bash script.
To accomplish this I:
created a network,
added the network to both containers,
and set the "app" container's hostname, network alias, and dns-search to "app" (because I hoped one of these options would help).
Here is the script:
docker network create --driver bridge dockertest_nw
docker build -t dockertest_app .
docker create \
--name dockertest_app_con \
--network dockertest_nw \
--hostname app \
--network-alias=app \
--dns-search=app \
dockertest_app
docker build -t dockertest_proxy ./docker/proxy/
docker create \
--name dockertest_proxy_con \
--network dockertest_nw \
--hostname proxy \
--network-alias=proxy \
--dns-search=proxy \
-p 80:80 \
dockertest_proxy
docker start dockertest_proxy_con
docker start dockertest_app_con
Unfortunately, this doesn't work.
I also know Docker provides an embedded DNS service, which docker-compose somehow uses; should I also be using it in some way?
Could anyone give me some suggestions?
Update:
For reference, here are the logs I got from the nginx container, which I would say show that nginx cannot resolve "app":
172.18.0.1 - - [13/Apr/2017:14:49:06 +0000] "GET / HTTP/1.1" 502 576 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" "-"
2017/04/13 14:49:06 [error] 5#5: *13 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: app, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3002/", host: "app"
You're tripping yourself up with all those options. All you really need is --network-alias to set the short form names app and proxy in your containers, which will be available in addition to the container names dockertest_app and dockertest_proxy.
docker network create --driver bridge dockertest_nw
docker build -t dockertest_app .
docker create \
--name dockertest_app \
--network dockertest_nw \
--network-alias=app \
dockertest_app
docker build -t dockertest_proxy ./docker/proxy/
docker create \
--name dockertest_proxy \
--network dockertest_nw \
--network-alias=proxy \
-p 80:80 \
dockertest_proxy
docker start dockertest_proxy
docker start dockertest_app
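To verify that the alias resolves on the user-defined network, a quick check (assuming busybox ping is available in the proxy image):
# Resolve and ping the alias from inside the proxy container
docker exec dockertest_proxy ping -c 1 app
# Or list the containers attached to the network
docker network inspect dockertest_nw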
Hello fellows, I have made a custom WordPress image, located here: https://github.com/ellakcy/wordpressWithPlugins
In the entrypoint script I am using wp-cli to generate a custom user and preinstall plugins. But I cannot log in to the control panel with the user generated by wp-cli.
Do you have any idea how to fix it?
The entrypoint script is the following: https://github.com/ellakcy/wordpressWithPlugins/blob/master/docker-entrypoint.sh
I run the containers with these commands (for development purposes):
docker run --name wpdb -e MYSQL_ROOT_PASSWORD=1234 -d mariadb
docker run --name mywordpress --link wpdb:mysql -p 8080:80 -ti wp
And I am using Apache as a reverse proxy in order to access the WordPress running in the mywordpress container (note that ProxyPassReverse takes the path first, then the URL):
<VirtualHost *:80>
    ProxyPass / http://172.17.0.3/
    ProxyPassReverse / http://172.17.0.3/
</VirtualHost>
(In place of 172.17.0.3, use the IP of the container running WordPress.)
Edit 1
I managed to log in by setting up a network:
docker network create --subnet="172.19.0.0/16" wordpress_default
And assigning custom IPs to the containers. (I also set some environment variables.)
Run MySQL/MariaDB:
docker run --name wpdb --net wordpress_default --ip 172.19.0.2 -e MYSQL_ROOT_PASSWORD=1234 -d mariadb
Run the WordPress container with some extra environment variables:
docker run --name mywordpress --net wordpress_default --ip 172.19.0.3 --link wpdb:mysql -e WORDPRESS_ADMIN_PASSWORD=1234 -e WORDPRESS_ADMIN_EMAIL=pc_magas#openmailbox.org -e WORDPRESS_URL=172.19.0.3 -p 8080:80 -ti wp
Then I visit the WordPress site via the IP given in the second command. But I still have problems with the local Apache running as a reverse proxy.
In the end, just manually setting the machine's IP as the URL works like a charm.
docker run --name wpdb --net wordpress_default --ip 172.19.0.2 -e MYSQL_ROOT_PASSWORD=1234 -d mariadb
Run the WordPress container with some extra environment variables:
docker run --name mywordpress --net wordpress_default --ip 172.19.0.3 --link wpdb:mysql -e WORDPRESS_ADMIN_PASSWORD=1234 -e WORDPRESS_ADMIN_EMAIL=pc_magas#openmailbox.org -e WORDPRESS_URL=172.19.0.3 -p 8080:80 -ti wp
All I had to do was set the following vhost in my Apache:
<VirtualHost *:80>
    RequestHeader set X-Forwarded-Proto "http"
    ProxyPass / http://172.19.0.3/
    ProxyPassReverse / http://172.19.0.3/
</VirtualHost>
(Perhaps for production may need some changes)
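Since the mywordpress container already publishes port 8080 on the host (-p 8080:80), an alternative sketch would proxy to the published port and avoid hard-coding the container IP (WORDPRESS_URL would then need to match whatever host name you use):
<VirtualHost *:80>
    RequestHeader set X-Forwarded-Proto "http"
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>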
I am trying to host a simple static site using the Docker Nginx Image from Dockerhub: https://registry.hub.docker.com/_/nginx/
A note on my setup, I am using boot2docker on OSX.
I have followed the instructions, but I still cannot connect to the running container:
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker build -t wargames-front-end .
Sending build context to Docker daemon 813.6 kB
Sending build context to Docker daemon
Step 0 : FROM nginx
---> 42a3cf88f3f0
Step 1 : COPY app /usr/share/nginx/html
---> Using cache
---> 61402e6eb300
Successfully built 61402e6eb300
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker run --name wargames-front-end -d -p 8080:8080 wargames-front-end
9f7daa48a25bdc09e4398fed5d846dd0eb4ee234bcfe89744268bee3e5706e54
MacBook-Pro:LifeIT-war-games-frontend ryan$ curl localhost:8080
curl: (52) Empty reply from server
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f7daa48a25b wargames-front-end:latest "nginx -g 'daemon of 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:8080->8080/tcp, 443/tcp wargames-front-end
Instead of localhost, use the boot2docker IP. First run boot2docker ip and use that address: <your-b2d-ip>:8080. You also need to make sure port 8080 is forwarded in VirtualBox for the boot2docker VM.
Here is the way to connect to the nginx docker container service:
docker ps # confirm nginx is running, which you have done.
docker port wargames-front-end # get the ports, for example: 80/tcp, 0.0.0.0:8080->8080/tcp, 443/tcp
boot2docker ip # get the IP address, for example: 192.168.59.103
So now, you should be fine to connect to:
http://192.168.59.103:8080
https://192.168.59.103:8080
Here's how I got it to work.
docker kill wargames-front-end
docker rm wargames-front-end
docker run --name wargames-front-end -d -p 8080:80 wargames-front-end
Then I went to VirtualBox and set up the corresponding port-forwarding settings.
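The key change is publishing the container's port 80 (where nginx listens by default) instead of 8080. With boot2docker you then reach it via the VM's address, something like this sketch (the IP is the boot2docker default and may differ on your machine):
boot2docker ip
# e.g. 192.168.59.103
curl 192.168.59.103:8080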