I am reading the official Docker 1.10.3 documentation (at the time of writing, it is still in a branch) and it says:
--net-alias=ALIAS
In addition to --name as described above, a container is discovered by one or more of its configured --net-alias (or --alias in docker network connect command) within the user-defined network. The embedded DNS server maintains the mapping between all of the container aliases and its IP address on a specific user-defined network. A container can have different aliases in different networks by using the --alias option in docker network connect command.
--link=CONTAINER_NAME:ALIAS
Using this option as you run a container gives the embedded DNS an extra entry named ALIAS that points to the IP address of the container identified by CONTAINER_NAME. When using --link, the embedded DNS guarantees that the localized lookup result is visible only inside the container where --link is used. This lets processes inside the new container connect to the other container without having to know its name or IP.
Does a network alias on one container effectively act as a link from another container in the same network?
There are two differences between --net-alias and --link:
With --net-alias, one container can access the other container only if they are on the same network. In other words, in addition to --net-alias foo and --net-alias bar, you need to start both containers with --net foobar_net after creating the network with docker network create foobar_net.
With --net-alias foo, all containers in the same network can reach the container by using its alias foo. With --link, only the linked container can reach the container by using the name foo.
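The difference is easy to observe with a quick experiment (the network, container, and alias names here are made up for illustration):

```shell
# Create a user-defined network and start a container with an alias on it
docker network create foobar_net
docker run -d --name c1 --net foobar_net --net-alias foo nginx

# Any container on foobar_net can resolve the alias "foo"
docker run --rm --net foobar_net busybox ping -c 1 foo

# With --link, only the container that declares the link gets the extra
# DNS entry "bar"; other containers on the network cannot resolve it
docker run -d --name c2 --net foobar_net nginx
docker run --rm --net foobar_net --link c2:bar busybox ping -c 1 bar
```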
Historically, --link was created before libnetwork and all network-related features. Before libnetwork, all containers ran in the same network bridge, and --link only added names to /etc/hosts. Then, custom networks were added and the behavior of --link in user-defined networks was changed.
See also Legacy container links for more information on --link.
Related
Is it possible to have a Docker container running locally not use the host's /etc/hosts file for DNS resolution?
For instance, in my local /etc/hosts file I could set 127.0.0.1 stackoverflow.com, but within my Docker container stackoverflow.com would resolve to the actual IP.
You'll find that the network configuration inside the container is identical to the host's only when running the container with the --network=host option.
If you run a container without the --network option, the Docker daemon connects it to the default bridge network. Containers on this default network can communicate with each other using IP addresses.
List docker networks with
docker network ls
and you can inspect them by
docker network inspect bridge (the last parameter is the network name)
Look at Docker container networking for more details.
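For example, to see which containers are attached to the default bridge and what IP addresses they were assigned, you can pass a Go template to docker network inspect (the fields used here are part of the standard inspect JSON):

```shell
# List the name and IPv4 address of every container on the default bridge
docker network inspect bridge \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
```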
I had the same problem and solved it.
By default, a container inherits the DNS settings of the host, as defined in the /etc/resolv.conf configuration file. Containers that use the default bridge network get a copy of this file.
Solution:
docker run --dns <your dns address, different from the host's ip>
Reference: Docker container networking
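A quick way to verify the effect (8.8.8.8 is just an example resolver here):

```shell
# Override the container's DNS and confirm it landed in /etc/resolv.conf
docker run --rm --dns 8.8.8.8 busybox cat /etc/resolv.conf
# expect a "nameserver 8.8.8.8" line in the output
```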
I have an environment where I need to run some external software in Docker containers. This software tries to connect to our product at a specific IP address - let's say 192.168.255.2 - and this address is fixed and cannot be changed. Moreover, the host IP address must also be set to a specific IP - let's say 192.168.255.3.
The product supports two Ethernet interfaces:
the first has strict restrictions on IP addressing - let's call it "first"
the second has no such restrictions and provides similar functionality - for this example, assume its IP address is set to 10.1.1.2/24 - let's call it "second"
I need to run multiple Docker containers simultaneously, each connected to one product (a 1-to-1 relationship).
The software running inside each container must believe it is reaching the product through the "first" network interface (the one with the fixed IP assignment that cannot be changed).
All I want to do is create the containers with the same IP address, so that the application inside each container appears to use the product's "first" Ethernet interface, and then redirect all traffic at the host level to the "second" interface using iptables.
Therefore I have one major problem: how to create multiple docker containers with the same IP address?
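For reference, the redirect part of the plan could be sketched with a DNAT rule on the host (addresses taken from the example above; this is an untested sketch, not a complete ruleset):

```shell
# On the host: rewrite traffic from the containers aimed at the fixed
# "first" address (192.168.255.2) so it is delivered to the product's
# "second" interface address (10.1.1.2) instead
iptables -t nat -A PREROUTING -d 192.168.255.2 -j DNAT --to-destination 10.1.1.2
```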
From the exact phrasing of your question, docker has the option to share the network stack of another container. Simply run:
docker run -d --name containera yourimage
docker run -d --net container:containera anotherimage
And you'll see that the second container has the same IP interfaces and can even see ports being used by the first container.
I'd recommend instead that you install both interfaces on your docker host and bind to the IP on the host that you need, then don't worry about the actual IP of the container. The result will be much simpler to manage. Here's how you bind to a single IP on the host with ports 8080 and 8888 that's mapped to two different container's port 80:
docker run -d -p 192.168.255.2:8080:80 --name nginx8080 nginx
docker run -d -p 192.168.255.2:8888:80 --name nginx8888 nginx
I'm using Docker to network together two containers, and one of the containers needs to be able to access the host network for service discovery. I cannot use --net=host because that makes the other container inaccessible.
What I am looking for is essentially a way to add the host network as a "secondary" network to the docker container so it can access other containers, as well as the host network.
Hopefully that makes sense. Docker is still very new to me so I apologize if my explanation is lacking.
EDIT: To elaborate more on the kind of discovery I need, basically I am running Plex media server inside a container and PlexConnect inside another container. In order for PlexConnect to be able to detect the right IP for Plex, it needs to be able to access the 192.168 local network of the host since it serves as the DNS for an AppleTV outside the Docker network.
So containers are as follows:
Plex (bridge mode and binds to the host port 192.168.1.100:32400)
PlexConnect (separate subnet of bridge mode, needs to be able to access 192.168.1.100:32400)
tl;dr I need what BMitch suggested below but the docker-compose version.
I've started two Docker containers on a user-defined Docker network. It appears that in order to have one connect to the exposed port of the other, I need to address the other container by its container name, as if it were a host name, relying on Docker's embedded DNS feature as per one of the options here. Addressing localhost does not seem to work.
Using a user defined network (which I gather is the recommended way now to communicate between containers), is it really the case that localhost would not work?
Or is there an alternative and standard way for making docker assume that the containers on the network are simply on a single host, thus making localhost resolve in the user-defined-network as it would on a regular non-virtualized host?
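One way to get localhost semantics between two containers is to have them share a network namespace via --net container:&lt;name&gt;, as described in an answer above (image names here are just examples):

```shell
# Start a web server, then run a second container inside its network
# namespace; "localhost" now refers to the shared stack, so the server
# is reachable on it from the second container
docker run -d --name web nginx
docker run --rm --net container:web busybox wget -qO- http://localhost/
```

Note that --net container:... cannot be combined with connecting the second container to a user-defined network, so this is a trade-off rather than a drop-in replacement for the embedded DNS.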
Can I run a Docker container that has access to eth1?
The DSL provider is connected to eth1.
My default internet connection is on eth0.
I want the Docker container to dial PPPoE on eth1, and the apps inside Docker to use that connection with full access to the internet, without port mapping.
I don't see any reason why you cannot do what you are attempting. Add the flag
--cap-add=NET_ADMIN
to the docker run command. This will give the container sufficient privileges to create and configure interfaces.
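A quick way to check the granted capability (busybox is just an example image; the address added is arbitrary):

```shell
# Without NET_ADMIN this fails with "Operation not permitted"; with it,
# the container is allowed to configure its own interfaces
docker run --rm --cap-add=NET_ADMIN busybox ip addr add 10.99.99.99/32 dev lo
```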
The easiest option is to run with the host's network stack. You won't have any network isolation between containers, but eth1 will be there as if you were running a regular process.
To do this, use docker run --net=host [rest of run command]
It may also be possible to build your own bridge and link a veth pair from the container through the bridge to eth1. I haven't tried that, nor have I ever tried to control PPPoE.