I have a 5-node Riak cluster running. I ssh to node 1 and run 'riak-admin test', the output of which is "Node is not running!". However, the REST API responds (e.g. http://{localhost}:8098/stats returns JSON stats as expected) and I can run a client that hits the ProtoBuf endpoint OK too. I must be making a noob mistake, but what? (Yes, I have tried sudo riak-admin test.)
I'm running Riak in a Docker container on a Debian Jessie host and have established a shell session via docker exec -i -t [container name] bash. I have hit the HTTP endpoint with curl from that session.
This, as you might expect, turns out to be environmental. I have my five nodes running in five docker containers as per http://basho.com/riak-quick-start-with-docker/
Each time a container is recycled during a session on the host, it is assigned the next IP address. The Riak instance in the container has its address statically configured, so if I recycle a container the container's actual IP and Riak's statically configured IP no longer match.
I've also encountered this when the hostname doesn't contain a ".", which is the case with Docker's default hostnames. I always have to start my Riak containers with docker run --hostname riakN.docker, as sketched below.
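For example, a minimal run command along those lines (the image placeholder and the published ports 8098/8087 are assumptions for illustration, not taken from the original posts):

docker run -d --name riak1 --hostname riak1.docker -p 8098:8098 -p 8087:8087 <riak-image>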
I am attempting to run my ASP.NET Core 1.1 web API in a Docker container, but I cannot connect to the web API from a browser or curl. To troubleshoot, I have also brought up standard nginx and Apache httpd containers and cannot connect to these either, so I believe this is a Docker/Docker Toolbox/configuration issue rather than a problem with my application.
I'll focus on what I have done with nginx and Apache:
I am running Docker Toolbox on Windows 7 Professional, and everything seems to work as I would expect.
Docker commands all work as expected
I can access the underlying Windows filesystem
I can get the expected results from curl http://localhost (if I start the default IIS website on Windows 7)
So now I shut down IIS and run nginx in a container:
$ docker run -d -p 80:80 nginx
45bb1f373c11b820d8431de3eb3bf222d57d412de53e8625f461b62c4279e644
Docker now shows nginx running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45bb1f373c11 nginx "nginx -g 'daemon off" 47 seconds ago Up 48 seconds 0.0.0.0:80->80/tcp, 443/tcp admiring_pike
But I cannot connect with either curl (within the Docker Toolbox command prompt) or a web browser in Windows:
$ curl http://localhost
curl: (7) Failed to connect to localhost port 80: Connection refused
I get exactly the same results if I run an Apache 2.4 (httpd) container.
Any ideas? Thanks! Peter
I have found the answer in another question here.
Because Docker Toolbox runs on a lightweight Linux VM, it has its own IP address. One needs either to map localhost to the VM using DOCKER_HOST or to access the VM via its IP address, found using the command:
docker-machine ip default
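A quick check along those lines (assuming the machine is named default and nginx was published on port 80 as in the question):

$ curl http://$(docker-machine ip default)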
As you are running in a VM, you need to follow the Docker documentation from here.
After that, run the following command to check the IP address of your VM:
docker-machine ip default
Start nginx and hit [default machine IP]:port in the browser. It works!
I have installed Docker Engine v1.12.3 on Ubuntu 14.04 LTS, and after making the following changes to enable the Remote API, I'm not able to pull or run any Docker images:
Added DOCKER_OPTS="-H tcp://127.0.0.1:2375" in /etc/default/docker.
/etc/init.d/docker start.
Following is the error received,
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Note: I have added the logged-in user to the docker group.
If you configure the docker daemon to listen to a TCP socket (as you do), you should use the -H command line option with the docker command to point it to that socket instead of the default Unix socket.
@mustaccio is correct. The docker command defaults to using a Unix socket, normally at /var/run/docker.sock. You can either set up your options like:
DOCKER_OPTS="-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock"
and restart the daemon, or always use docker -H tcp://127.0.0.1:2375 whenever you interact with the host from the command line.
The only good scenario I've seen for removing the Unix socket is pure user security. If your Docker host is TLS enabled, you can ensure that only authorized people access the host via signed certificates, not just anyone with access to the system.
I can't get my ASP.NET web application to get served to my browser when the web app is containerized in Docker.
I'm running a Mac, and I've used Visual Studio Code to create an ASP.NET web application. It's a simple, out-of-the-box demo based on the yo aspnet "Empty Application." When run "natively" (outside of Docker), this application serves a "Hello World!" to http://localhost:5000 just fine. In other words, running dnx web starts the web server (Kestrel) and yields:
Hosting environment: Production
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
This is good. Now enter Docker. I seem to have successfully built a Docker image containing the web application, and when I run the container in Docker, I get the same output from Kestrel. Also good, but I can no longer load the "Hello World!" page in my browser at http://localhost:5000. Instead, I get ERR_CONNECTION_REFUSED. This is fairly obviously due to the Docker "indirection": there is nothing serving directly on port 5000 anymore. In other words, I think there's an incorrect forwarding configuration, or I am misunderstanding the addressing.
I believe that port forwarding is involved in this process. In my Dockerfile, I am using an EXPOSE 5000 which I thought would allow me to map my local use of port 5000 to the Docker container's port 5000 using a run command like this:
docker run -i -t -p 5000:5000 container_name
But that's not the case with http://localhost:5000 (ERR_CONNECTION_REFUSED). So it occurred to me that Docker is almost certainly not at localhost. I had noticed when Docker loads, it says:
docker is configured to use the default machine with IP 192.168.99.100
So, I thought I'd try http://192.168.99.100:5000, but again (confusingly?) ERR_CONNECTION_REFUSED. Next, I read an interesting article here and I was able to determine from the suggested command
docker inspect container_name | grep IPAddress
that the container is assigned "IPAddress": "172.17.0.2".
So, I thought I'd try http://172.17.0.2:5000. And now we might actually be getting somewhere, because instead of ERR_CONNECTION_REFUSED, I get a spinning hourglass and a resulting timeout. But still no "Hello World!"
What might I be missing?
It turns out that the web application is available at the IP address of the virtual machine 192.168.99.100 as suspected. 172.17.0.2 was clearly some sort of red herring.
The real kicker seems to be that the server inside the container must listen on 0.0.0.0 (all interfaces) rather than on localhost.
Following the excellent advice of this posting, I edited the Dockerfile and specified the following:
ENTRYPOINT ["dnx", "web", "--server.urls", "http://0.0.0.0:5000"]
Because...
This will allow our web application to serve requests that come in from the
port forwarding provided by Docker which defaults to 0.0.0.0
The port mapping is crucial to link the host's port to the container's, but the EXPOSE command is apparently redundant. Now, when I run
docker run -i -t -p 80:5000 container_name
I can simply browse to http://192.168.99.100 (port 80 is implicit)
And voilà! There's my "Hello World!"
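For reference, a minimal Dockerfile along these lines might look like the sketch below (the base image, project layout, and dnu restore step are assumptions on my part, not taken from the original post):

# sketch only: base image and project layout are assumptions
FROM microsoft/aspnet
COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]
# optional, as noted above
EXPOSE 5000
# bind Kestrel to 0.0.0.0 so it accepts the connections Docker forwards in
ENTRYPOINT ["dnx", "web", "--server.urls", "http://0.0.0.0:5000"]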
Apart from using http://0.0.0.0:5000, you can use http://*:5000:
ENTRYPOINT ["dnx", "web", "--server.urls", "http://*:5000"]
or you can define this in the commands section of project.json:
"commands": {
"kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5004"
},
"web": ......
and the ENTRYPOINT in the Dockerfile can be:
ENTRYPOINT ["dnx","-p","project.json","kestrel"]
I have a running nginx container: # docker run --name mynginx1 -P -d nginx;
And got its port info from docker ps: 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp
Then I could get a response from within the container (id: c30991a04b2f):
docker exec -i -t c3099 bash
curl http://localhost => returns the default index.html page content; it works
However, when I run curl http://localhost:32769 outside of the container, I get this:
curl: (7) failed to connect to localhost port 32769: Connection refused
I am running on a Mac with Docker version 1.9.0; nginx latest.
Does anyone know what causes this? Any help? Thank you.
If you are on OS X, you are probably using a VirtualBox VM for your Docker environment.
Make sure you have forwarded your port 32769 to your actual host (the mac), in order for that port to be visible from localhost.
This is valid for the old boot2docker, or the new docker machine.
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port32769,tcp,,32769,,32769"
VBoxManage controlvm "boot2docker-vm" natpf1 "udp-port32769,udp,,32769,,32769"
(controlvm if the VM is running, modifyvm if the VM is stopped)
(replace "boot2docker-vm" by the name of your VM: see docker-machine ls)
I would recommend not using -P, but rather a static port mapping: -p xxx:80 -p yyy:443.
That way, you can do the VirtualBox port forwarding once, using fixed values; see the sketch below.
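For instance, a minimal sketch (the port numbers 8080/8443 are an assumption for illustration only):

docker run --name mynginx1 -d -p 8080:80 -p 8443:443 nginx
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port8080,tcp,,8080,,8080"
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port8443,tcp,,8443,,8443"
curl http://localhost:8080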
Of course, you can access the VM directly through docker-machine ip vmname
curl http://$(docker-machine ip vmname):32769
Solved. I misunderstood how Docker port mapping works.
Since I'm using a Mac, the host for the nginx container is a VM; 0.0.0.0:32769->80/tcp maps port 80 of the container to port 32769 of the VM.
solution:
docker-machine ip vm-name => 192.168.99.xx
curl http://192.168.99.xx:32769
Not exactly an answer to your question, but I spent some time trying to figure out a similar thing in the context of "why is my docker container not connecting to elasticsearch on localhost:9200", and this was the first S.O. question that popped up, so I hope it helps some other googling person.
if you are linking containers together (e.g. docker run --rm --name web2 --link db:db training/webapp env)
... then Docker adds environment variables:
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
... and also updates your /etc/hosts
# /etc/hosts
#...
172.17.0.9 db
so you can technically reach it by name, e.g. ping db
https://docs.docker.com/v1.8/userguide/dockerlinks/
so for elasticsearch it is:
# /etc/hosts
# ...
172.17.0.28 elasticsearch f9db83d0dfb5 ecs-awseb-qa-3Pobblecom-env-f7yq6jhmpm-10-elasticsearch-fcbfe5e2b685d0984a00
so wget elasticsearch:9200 will work
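For completeness, a linked run along these lines might look like this (the image tag and link alias are assumptions for illustration):

docker run -d --name elasticsearch elasticsearch:1.7
docker run --rm --link elasticsearch:elasticsearch busybox wget -qO- http://elasticsearch:9200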
I plan to split my monolithic server up into many small docker containers but haven't found a good solution for "inter-container communication" yet. This is my target scenario:
I know how to link containers together and how to expose ports, but none of these solutions are satisfying to me.
Is there any solution to communicate via hostnames (container names) between the containers like in a traditional server network?
The new networking feature allows you to connect to containers by
their name, so if you create a new network, any container connected to
that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation;
Note: Unlike legacy links the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases
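Putting the steps together, a minimal end-to-end sketch (the network name and the busybox image are assumptions for illustration; the container names match the ping output above):

$ docker network create testnet
$ docker run -d --net=testnet --name c1 busybox sleep 3600
$ docker run -d --net=testnet --name c2 busybox sleep 3600
$ docker exec -ti c2 ping -c 3 c1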
Edit: After Docker 1.9, the docker network command (see below https://stackoverflow.com/a/35184695/977939) is the recommended way to achieve this.
My solution is to set up dnsmasq on the host and have the DNS records automatically updated: the "A" records carry the names of the containers and point to the containers' IP addresses, refreshed automatically (every 10 seconds). The updating script is pasted here:
#!/bin/bash

# 10 seconds interval time by default
INTERVAL=${INTERVAL:-10}

# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}

# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}

declare -A service_map

while true
do
    changed=false
    while read line
    do
        # the container name is the last column of the docker ps output
        name=${line##* }
        ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
        if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP addr changed
        then
            service_map[$name]=$ip
            # write to file
            echo "$name has a new IP Address $ip" >&2
            echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
            changed=true
        fi
    done < <(${DOCKER} ps | ${TAIL} -n +2)

    # a change of IP address occurred, restart dnsmasq
    if [ $changed = true ]
    then
        systemctl restart dnsmasq
    fi

    ${SLEEP} $INTERVAL
done
Make sure your dnsmasq service is available on docker0. Then, start your container with --dns HOST_ADDRESS to use this mini DNS service, as shown below.
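For example (an assumption for illustration: docker0 has the common default address 172.17.0.1; check yours with ip addr show docker0):

docker run --dns 172.17.0.1 -it ubuntu bash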
Reference: http://docs.blowb.org/setup-host/dnsmasq.html
That should be what --link is for, at least for the hostname part.
With docker 1.10, and PR 19242, that would be:
docker network create --net-alias=[]: Add network-scoped alias for the container
(see last section below)
That is what the "Updating the /etc/hosts file" section details:
In addition to the environment variables, Docker adds a host entry for the source container to the /etc/hosts file.
For instance, launch an LDAP server:
docker run -t --name openldap -d -p 389:389 larrycai/openldap
And define an image to test that LDAP server:
FROM ubuntu
RUN apt-get -y install ldap-utils
RUN touch /root/.bash_aliases
RUN echo "alias lds='ldapsearch -H ldap://internalopenldap -LL -b
ou=Users,dc=openstack,dc=org -D cn=admin,dc=openstack,dc=org -w
password'" > /root/.bash_aliases
ENTRYPOINT bash
You can expose the 'openldap' container as 'internalopenldap' within the test image with --link:
docker run -it --rm --name ldp --link openldap:internalopenldap ldaptest
Then, if you type 'lds', that alias will work:
ldapsearch -H ldap://internalopenldap ...
That would return people. Meaning internalopenldap is correctly reached from the ldaptest image.
Of course, docker 1.7 will add libnetwork, which provides a native Go implementation for connecting containers. See the blog post.
It introduced a more complete architecture, with the Container Network Model (CNM)
That will update the Docker CLI with new “network” commands, and document how the “--net” flag is used to assign containers to networks.
docker 1.10 has a new section Network-scoped alias, now officially documented in network connect:
While links provide private name resolution that is localized within a container, the network-scoped alias provides a way for a container to be discovered by an alternate name by any other container within the scope of a particular network.
Unlike the link alias, which is defined by the consumer of a service, the network-scoped alias is defined by the container that is offering the service to the network.
Continuing with the above example, create another container in isolated_nw with a network alias.
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
--alias=[]
Add network-scoped alias for the container
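As a quick check (a sketch building on the example above: isolated_nw is the network and app is the network-scoped alias of container6; busybox is used here only because it ships with ping):

$ docker run --net=isolated_nw --rm busybox ping -c 1 app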
You can use the --link option to link another container with a preferred alias.
You can pause, restart, and stop containers that are connected to a network. Paused containers remain connected and can be revealed by a network inspect. When the container is stopped, it does not appear on the network until you restart it.
If specified, the container's IP address(es) is reapplied when a stopped container is restarted. If the IP address is no longer available, the container fails to start.
One way to guarantee that the IP address is available is to specify an --ip-range when creating the network, and choose the static IP address(es) from outside that range. This ensures that the IP address is not given to another container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
$ docker network connect --link container1:c1 multi-host-network container2
EDIT: It is not bleeding edge anymore: http://blog.docker.com/2016/02/docker-1-10/
Original Answer
I battled with it the whole night.
If you're not afraid of bleeding edge, the latest version of Docker engine and Docker compose both implement libnetwork.
With the right config file (which needs to be version 2), you can create services that will all see each other. And, as a bonus, you can scale them with docker-compose as well (you can scale any service you want that doesn't bind a port on the host).
Here is an example file
version: "2"
services:
router:
build: services/router/
ports:
- "8080:8080"
auth:
build: services/auth/
todo:
build: services/todo/
data:
build: services/data/
And the reference for this new version of compose file:
https://github.com/docker/compose/blob/1.6.0-rc1/docs/networking.md
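With that file in place, services resolve each other by service name; a quick sanity check might look like this (a sketch assuming the built images include ping, which is an assumption on my part):

$ docker-compose up -d
$ docker exec -it $(docker-compose ps -q router) ping auth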
As far as I know, using only Docker this is not possible. You need some DNS to map container IPs to hostnames.
If you want an out-of-the-box solution, one option is to use, for example, Kontena. It comes with network overlay technology from Weave, and this technology is used to create virtual private LAN networks for each service, so every service can be reached by its service_name.kontena.local address.
Here is a simple example of a WordPress application's YAML file, where the WordPress service connects to the MySQL server at the wordpress-mysql.kontena.local address:
wordpress:
  image: wordpress:4.1
  stateful: true
  ports:
    - 80:80
  links:
    - mysql:wordpress-mysql
  environment:
    - WORDPRESS_DB_HOST=wordpress-mysql.kontena.local
    - WORDPRESS_DB_PASSWORD=secret
mysql:
  image: mariadb:5.5
  stateful: true
  environment:
    - MYSQL_ROOT_PASSWORD=secret