Can I run a Docker container that has access to eth1?
My DSL provider is connected to eth1.
My default internet connection is on eth0.
I want the Docker container to dial PPPoE on eth1, and the apps inside it to use that connection with full internet access, without port mapping.
I don't see any reason why you cannot do what you are attempting. Add the flag
--cap-add=NET_ADMIN
to the docker run command. This will give the container sufficient privileges to create and configure interfaces.
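For example, a minimal sketch, assuming a hypothetical image named pppoe-image with the rp-pppoe tools installed (host networking is used here so that eth1 is visible inside the container):
# hypothetical image; pppoe-start comes from rp-pppoe and reads /etc/ppp/pppoe.conf
docker run --cap-add=NET_ADMIN --net=host pppoe-image pppoe-start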
The easiest option is to run with the host's network stack. You won't have any network isolation between containers, but eth1 will be there as if you were running a regular process.
To do this, use docker run --net=host [rest of run command]
It may also be possible to build your own bridge, attach eth1 to it, and link a veth pair from the container to the bridge. I haven't tried that, nor have I ever tried to control PPPoE; an untested sketch follows.
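The wiring would look roughly like this (br1, veth0 and veth1 are hypothetical names; the container end still needs to be brought up and configured from inside its namespace):
# untested sketch: bridge eth1 and hand one end of a veth pair to the container
ip link add name br1 type bridge
ip link set br1 up
ip link set eth1 master br1
ip link add veth0 type veth peer name veth1
ip link set veth0 master br1
ip link set veth0 up
PID=$(docker inspect -f '{{.State.Pid}}' <container>)
ip link set veth1 netns $PID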
Related question:
My app running inside Docker logs all incoming traffic as coming from 172.17.0.1.
I believe that's the docker0 bridge?
Is it possible to avoid this, so the app inside sees the outside client's real IP?
I also saw a network type named host, but with that the Docker container shares the same IP as the host?
Is it possible to do it so that, say, the host is 192.168.1.101 and the Docker container is 192.168.1.102?
OK, I found a solution: Docker has something called the macvlan driver.
Basically it's like br0 on a normal home router.
It can plug the container into the real L2 segment instead of the NATed docker0 "bridge" network.
But I can't use it: macvlan is only supported on Linux hosts, and I'm using a Mac.
I have a server VLAN of 10.101.10.0/24 and my Docker host is 10.101.10.31. How do I configure a bridge network on my Docker host (a VM) so that all the containers can connect directly to my LAN without having to redirect ports around on the default 172.17.0.0/16? I tried searching, but all the howtos I've found so far resulted in me losing my SSH session, and I had to go into the VM from a console to revert the steps.
There are multiple ways this can be done. The two I've had the most success with are routing a subnet to a Docker bridge, and using a custom bridge on the host LAN.
Docker Bridge, Routed Network
This has the benefit of only needing native Docker tools to configure Docker. It has the downside of needing to add a route to your network, which is outside of Docker's remit and usually manual (or relies on the "networking team").
Enable IP forwarding
/etc/sysctl.conf: net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
Create a Docker bridge with a new subnet on your VM network, say 10.101.11.0/24
docker network create routed0 --subnet 10.101.11.0/24
Tell the rest of the network that 10.101.11.0/24 should be routed via 10.101.10.X where X is IP of your docker host. This is the external router/gateway/"network guy" config. On a linux gateway you could add a route with:
ip route add 10.101.11.0/24 via 10.101.10.31
Create containers on the bridge with 10.101.11.0/24 addresses.
docker run --net routed0 busybox ping 10.101.10.31
docker run --net routed0 busybox ping 8.8.8.8
Then you're done. Containers have routable IP addresses.
If you're OK with the network side, or you run something like RIP/OSPF on the network, or Calico, which takes care of routing, then this is the cleanest solution.
Custom Bridge, Existing Network (and interface)
This has the benefit of not requiring any external network setup. The downside is that the setup on the Docker host is more complex. The main interface requires this bridge at boot time, so it's not a native Docker network setup; Pipework or manual container setup is required.
Using a VM can make this a little more complicated, as you are running extra interfaces with extra MAC addresses over the main VM interface, which will need additional "promiscuous" configuration first to allow this to work.
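The hypervisor-side setting varies by platform, but on the guest you can, for example, enable promiscuous mode on the uplink interface like this:
# allow the interface to receive frames for the containers' extra MAC addresses
ip link set eth0 promisc on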
The permanent network config for bridged interfaces varies by distro. The following commands outline how to set the interface up, but the changes will disappear after a reboot. You will need console access or a separate route into your VM, as you are changing the main network interface config.
Create a bridge on the host.
ip link add name shared0 type bridge
ip link set shared0 up
In /etc/sysconfig/network-scripts/ifcfg-shared0
DEVICE=shared0
TYPE=Bridge
BOOTPROTO=static
DNS1=8.8.8.8
GATEWAY=10.101.10.1
IPADDR=10.101.10.31
NETMASK=255.255.255.0
ONBOOT=yes
Attach the primary interface to the bridge, usually eth0
ip link set eth0 up
ip link set eth0 master shared0
In /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=shared0
Reconfigure your bridge to have eth0's IP config.
ip addr add dev shared0 10.101.10.31/24
ip route add default via 10.101.10.1
Attach containers to bridge with 10.101.10.0/24 addresses.
CONTAINERID=$(docker run -d --net=none busybox sleep 600)
pipework shared0 $CONTAINERID 10.101.10.43/24@10.101.10.Y
Or use a DHCP client inside the container
pipework shared0 $CONTAINERID dhclient
Docker macvlan network
Docker has since added a network driver called macvlan that can make a container appear to be directly connected to the physical network the host is on. The container is attached to a parent interface on the host.
docker network create -d macvlan \
--subnet=10.101.10.0/24 \
--gateway=10.101.10.1 \
-o parent=eth0 pub_net
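Containers can then be attached to it directly; for example (the static --ip is optional, Docker will otherwise pick a free address from the subnet):
docker run --net=pub_net --ip=10.101.10.50 -d busybox sleep 600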
This will suffer from the same VM/softswitch problems, where the network and interface will need to be promiscuous with regard to MAC addresses.
Is it possible to have a Docker container running locally not use the host's /etc/hosts file for DNS resolution?
For instance, in my local /etc/hosts file I could set 127.0.0.1 stackoverflow.com, but within my Docker container stackoverflow.com would resolve to the actual IP.
You'll find the network configuration inside the container is identical to the host's only when running the container with the --network=host option.
If you run a container without the --network option, the Docker daemon connects it to the default bridge network. Containers on this default network can communicate with each other using IP addresses.
List docker networks with
docker network ls
and you can inspect them with
docker network inspect bridge (the last parameter is the network name)
Look at Docker container networking for more details.
I had the same problem and solved it.
By default, a container inherits the DNS settings of the host, as defined in the /etc/resolv.conf configuration file. Containers that use the default bridge network get a copy of this file.
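You can verify this by printing the file from a throwaway container:
docker run --rm busybox cat /etc/resolv.conf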
Solution:
docker run --dns <your dns address, different from the host's ip>
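For example, forcing the container to resolve through a public resolver (8.8.8.8 here is only an illustration):
docker run --rm --dns 8.8.8.8 busybox nslookup stackoverflow.com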
Reference: Docker container networking
I'm new to Docker (I worked with KVM earlier). The first problem I ran into was how to configure a bridged network in Docker. I would like to have a configuration similar to a KVM bridged network. Does anyone know if this is possible?
docker run --network=host
But if what you want is to access your container from outside, use the port-mapping option:
docker run -p 80:80
You will access your container using the host IP and the port you specified.
Internally, on Linux, Docker uses iptables to redirect the traffic from your host to the container.
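You can see the rules it creates for published ports in the nat table:
# list the DNAT rules Docker maintains for port mappings
sudo iptables -t nat -L DOCKER -n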
Regards
I've created a Docker swarm with a website inside the swarm, publishing port 8080 externally. I want to consume that port using Nginx running outside the swarm on port 80, which will perform server-name resolution and host static files.
The problem is that swarm automatically publishes port 8080 to the internet using iptables, and I don't know whether it's possible to allow only the local Nginx instance to use it. Currently users can access the site on both port 80 and port 8080, and the second one is broken (no images).
I tried playing with ufw, but it's not working. Manually changing iptables would also be a nightmare, as I would have to do it on every swarm node after every update. Any solutions?
EDIT: I can't use the same network for the swarm and for Nginx outside the swarm, because an overlay network is incompatible with normal, single-host containers. Theoretically I could put Nginx inside the swarm, but I prefer to keep it separate, on the same host that contains the static files.
No, right now you are not able to bind a published port to an IP (not even to 127.0.0.1) or to an interface (like the loopback interface lo). But there are two issues dealing with this problem:
github.com - moby/moby - Assigning service published ports to IP
github.com - moby/moby - docker swarm mode: ports on 127.0.0.1 are exposed to 0.0.0.0
So you could subscribe to them and/or participate in the discussion.
Further reading:
How to bind the published port to specific eth[x] in docker swarm mode
Yes, if the containers are on the same network, you don't need to publish ports for them to reach each other.
In your case, you can publish port 80 from the Nginx container and not publish any ports from the website container. Nginx can still reach the website container on port 8080 as long as both containers are on the same Docker network.
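A minimal sketch of that layout (the network name and the website image are hypothetical):
docker network create web_net
docker run -d --name website --network web_net my-website-image   # no -p: port 8080 stays internal
docker run -d --name nginx --network web_net -p 80:80 nginx
Inside the Nginx config, the site can then be reached by container name, e.g. proxy_pass http://website:8080;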
"Temp" solution that I am using is leaning on alpine/socat image.
Idea:
use additional lightweight container that is running outside of swarm and use some port forwarding tool to (e.g. socat is used here)
add that container to the same network of the swarm service we want to expose only to localhost
expose this helper container at localhost:HOST_PORT:INTERNAL_PORT
use socat of this container to forward trafic to swarm's machine
Command:
docker run --name socat-elasticsearch -p 127.0.0.1:9200:9200 --network elasticsearch --rm -it alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
The -it flags can be removed once you've confirmed that everything works for you; add -d to run it daemonized.
Daemon command:
docker run --name socat-elasticsearch -d -p 127.0.0.1:9200:9200 --network elasticsearch --rm alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
My use case:
Sometimes I need to access ES directly, so this approach is just fine for me.
I would like to see a Docker-native solution, though.
P.S. Docker's auto-restart feature can be used if this needs to stay up and running after a host machine restart.
See restart policy docs here:
https://docs.docker.com/engine/reference/commandline/run/#restart-policies---restart
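For example, the daemon command above with a restart policy (note that --rm has to be dropped, since it conflicts with --restart):
docker run --name socat-elasticsearch -d --restart unless-stopped -p 127.0.0.1:9200:9200 --network elasticsearch alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200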