docker network connect to host second interface - networking

I have a use-case where my Docker container needs a second interface that shares the host's second network interface. Is this possible using docker network connect? If so, how would it be done?

This may not be the answer, but it's a bit too long to explain in a comment.
If I were you I would:
Start the container with --net=host, sharing the host network stack and IP:
user@host:~$ docker run --name=c0 --net=host docker-image
Then plug it into the network with the command:
user@host:~$ docker network connect mynet c0
But I just tried it and here is the error message:
Error response from daemon: Container sharing network namespace with another container or host cannot be connected to any other network
As this is not working, I guess it is not (yet?) possible. I suggest you work around your need for the host stack IP (which should be considered insecure, by the way).
Why do you need the host stack IP?
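If the use-case really is to give the container a second interface on the host's second NIC, the macvlan driver (mentioned in the related question below) may be a cleaner route than sharing the host stack. A rough sketch, where the interface name eth1 and the 10.10.0.0/24 addressing are only placeholders for your actual second network:

# Create a macvlan network parented on the host's second NIC (assumed to be eth1)
docker network create -d macvlan \
  --subnet=10.10.0.0/24 --gateway=10.10.0.1 \
  -o parent=eth1 second-nic-net

# Start the container on the default bridge (not --net=host), then attach the
# macvlan network as a second interface
docker run -d --name=c0 docker-image
docker network connect second-nic-net c0

The container then keeps eth0 on the Docker bridge and gets a second interface sitting directly on the L2 segment of the host's second NIC.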

Related

App inside Docker can only see clients coming from IP 172.17.0.1; how to avoid it?

My app lives inside Docker and logs all incoming traffic as coming from 172.17.0.1.
I believe that's the docker0 bridge?
Is it possible to avoid this, so that the app inside can see the outside clients' real IPs?
I also saw a network type named host, but then the Docker container shares the same IP as the host?
Is it possible to do it so that the host is 192.168.1.101 and the Docker container is 192.168.1.102?
OK, I found there is a solution: Docker has something called the macvlan driver.
Basically it's like br0 on a normal home router.
It can plug the container into the real L2 network instead of the fake docker0 NAT "bridge".
But I can't use it: macvlan is only supported on a Linux host, and I'm using a Mac.
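For anyone on a Linux host, the macvlan setup would look roughly like this (a sketch only; the interface name eth0 and the 192.168.1.0/24 addresses are assumptions based on the question above):

# Create a macvlan network on the host's LAN interface (assumed to be eth0)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan-net

# Run the app with its own LAN address so it sees clients' real source IPs
docker run -d --name myapp --network lan-net --ip 192.168.1.102 myapp-image

One known caveat: with macvlan the host itself usually cannot reach the container over that interface, even though other machines on the LAN can.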

Bluemix container failing to network

When I try to assign a public IP address to a container of mine (this one is an Nginx proxy container, so I'm exposing ports 80 and 443), I have pretty significant issues getting the public IP address to actually work.
Sometimes it will hang while networking the container, and other times the networking will finish but the public IP address still doesn't show any content.
I decided to cf ic exec -it nginx bash into the container and see whether I could connect to anything at all by doing something like ping 8.8.8.8, and it fails to even connect, telling me Destination Host Unreachable. I'm wondering if this has something to do with the Nginx container, or if anyone else has had issues networking with Bluemix Containers?
Sounds like an issue with the containers network for that tenant. In this case, only the container team can assist you, so open a support request directly from your Bluemix console, or open a new ticket here: https://support.ng.bluemix.net/gethelp/

use docker container on host network without sharing host's ip

My docker host is part of the local network 192.168.178.0/24.
Is there a way to run a container that becomes part of the host network, but does not share the same IP as the host? So for example, if the host has the IP 192.168.178.5, I'd like to give 192.168.178.8 to the container without interfering with the Docker host's network configuration.
Since a Docker container is by nature bound to use the networking stack of its host, it also has to share the host's IP to communicate with the network. For a one-container setup, the only solution would be to add a second NIC to the host and use that second NIC and its IP exclusively for your container... But apart from that, I don't see any solution that does not deeply mutilate the OSI model of your host's network stack and thus incur some major side effects :-/
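If you do go the second-NIC route, the simplest way to dedicate that NIC's address to the container is to bind the published ports to it. A sketch, assuming the second NIC already carries 192.168.178.8 and the service listens on port 80 (image and container names are placeholders):

# Publish the container's port only on the second NIC's address, so the service
# answers on 192.168.178.8:80 while the host keeps using 192.168.178.5
docker run -d --name web -p 192.168.178.8:80:80 web-image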

Giving full access to eth1 to docker container

Can I run a Docker container that will have access to eth1?
The DSL provider is connected to eth1.
I have my default internet connection on eth0.
I want the Docker container to dial PPPoE on eth1, and apps in Docker to use that connection with full access to the internet, without port mapping.
I don't see any reason why you cannot do what you are attempting. Add the flag
--cap-add=NET_ADMIN
to the docker run command. This will give the container sufficient privileges to create and configure interfaces.
The easiest option is to run with the host's network stack. You won't have any network isolation between containers, but eth1 will be there as if you were running a regular process.
To do this, use docker run --net=host [rest of run command]
It may also be possible to build your own bridge and attach both a veth from the container and eth1 to it. I haven't tried that, nor have I ever tried to control PPPoE.
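Putting the two answers together, a minimal sketch might look like the following (pppoe-image is just a placeholder for an image that ships pppd/rp-pppoe, and passing /dev/ppp through is an assumption about what pppd needs):

# Host network stack so eth1 is visible inside the container, plus NET_ADMIN so
# pppd can create and configure the ppp0 interface
docker run -d --name pppoe-dialer \
  --net=host \
  --cap-add=NET_ADMIN \
  --device /dev/ppp \
  pppoe-image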

Docker - access another container on the same machine via its public IP, without docker links

On a VPS with a static, publicly routable IP, I have a simple web server running (on port 8080) in a container that exports port 8080 (-p 0.0.0.0:8080:8080).
If I spin up another container on the same box and try to curl <public ip of host>:8080, it resolves the address and tries to connect, but fails when making the request (it just hangs).
From the host's shell (outside containers), curl <public ip of host>:8080 succeeds.
Why is this happening? My feeling is that, somehow, the virtual network cards fail to communicate with each other. Is there a workaround (besides using docker links)?
According to Docker's advanced networking docs (http://docs.docker.io/use/networking/): "Docker uses iptables under the hood to either accept or drop communication between containers."
As such, I believe you would need to set up inbound and outbound routing with iptables. This article gives a solid description of how to do so: http://blog.codeaholics.org/2013/giving-dockerlxc-containers-a-routable-ip-address/
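Until the iptables routing is in place, one workaround (a sketch; "webserver" is a placeholder for the web container's name) is to talk to the web container's bridge IP directly rather than the host's public IP, since container-to-container traffic on docker0 normally bypasses the published-port DNAT rules:

# On the host: look up the web container's bridge IP
docker inspect -f '{{ .NetworkSettings.IPAddress }}' webserver
# prints e.g. 172.17.0.2

# Inside the other container: use the bridge IP and the container port
curl http://172.17.0.2:8080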
