Connecting to a remote MySQL database from Docker container - networking

I have two servers in AWS, both in a security group that allows all traffic on all ports between members of the security group. On one server I have a MySQL server running without Docker (let's call this the "MySQL server"), and on the other I have Docker (let's call it the "Docker server"). I want to access MySQL from within a container on the Docker server without routing the traffic over the internet; I'd like to use the internal IP address of the MySQL server instead.
Is this possible? What are my options?
What I've tried so far
I've configured the MySQL server to listen on all interfaces, just for testing. This allows me to connect to the MySQL server successfully from the Docker server (using the mysql client to connect to the private IP address of the MySQL server). However, when I start a container, a new network namespace is created, so I can't reach the private IP address of the MySQL server anymore.
I've tried using an ambassador container as described here, but I run into the same problem: the private IP address of the MySQL server is not reachable from inside the ambassador container.
Example
Here's an example to illustrate the problem and what I'm trying to do.
From the Docker server (not in any container yet):
$ ping -c 1 10.0.0.155
PING 10.0.0.155 (10.0.0.155) 56(84) bytes of data.
64 bytes from 10.0.0.155: icmp_seq=1 ttl=64 time=0.777 ms
--- 10.0.0.155 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.777/0.777/0.777/0.000 ms
However, trying the same thing from within a container:
$ sudo docker run --rm -it apcera/nats-ping-client ping -c 1 10.0.0.155
PING 10.0.0.155 (10.0.0.155) 56(84) bytes of data.
From 10.0.0.200 icmp_seq=1 Destination Host Unreachable
--- 10.0.0.155 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
I expected this, because I know Docker creates a new private network just for the containers, but I don't know enough to work around it.
How can I wire things up so that I can access the MySQL server from within a container?

Yes, that's possible.
Whether a container can talk to the world is governed by two factors. The first factor is whether the host machine is forwarding its IP packets. The second is whether the host’s iptables allow this particular connection.
To check the setting on your kernel, or to turn it on manually (be sure it is set to 1):
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
$ sysctl net.ipv4.conf.all.forwarding=1
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
Docker will never make changes to your system's iptables rules if you set --iptables=false when the daemon starts. Otherwise the Docker server will append forwarding rules to the DOCKER filter chain. So be sure not to use --iptables=false.
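As a sanity check after enabling forwarding, you can persist the setting and retest from a container. A minimal sketch, assuming the conventional sysctl.d path (the filename is an arbitrary choice) and reusing the IP and image from the question:
$ echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-forwarding.conf
$ sudo sysctl --system
$ sudo iptables -L FORWARD -n -v   # inspect the forwarding rules Docker appended
$ sudo docker run --rm -it apcera/nats-ping-client ping -c 1 10.0.0.155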

Related

TCP packet transfer when sender and receiver are on the same system

I am running a server and a client on the same system; TCP is used for communication between them. When the client sends a packet to the server, will it go out through the network infrastructure (i.e. router, internet, etc.) and come back to the server, or will the transfer be handled within the system, bypassing the network?
If you are on the server, any communication you initiate to IP addresses also on the same server will never leave the server.
You can test this by installing tcpdump and then running the following from the local console/keyboard/mouse:
tcpdump -n -i enp0s5 not arp
Without generating any other network traffic, try to ssh to your account on IP 127.0.0.1 (e.g. risner@127.0.0.1).
Also try to initiate ssh to one of the machine's other IP addresses on the network.
Nothing should show up in tcpdump, which indicates the traffic is not leaving the machine.
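Conversely, you can watch the loopback interface to confirm where that traffic actually goes. A minimal sketch, assuming ssh's default port 22:
$ sudo tcpdump -n -i lo tcp port 22   # capture loopback traffic in one terminal
$ ssh risner@127.0.0.1                # in another terminal; the packets now appear in the capture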

Measure local TCP traffic between two processes

Let's say I have two local processes running on my computer, a client and a server. They have a TCP connection on a particular port, and I can see it using sudo tcptrack -i lo:
Client Server State ...
127.0.0.1:43832 127.0.0.1:42999 ESTABLISHED ...
How can I measure the real network traffic (in bytes) between client and server?
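One hedged approach, assuming root access and the server port 42999 from the tcptrack output above, is to add iptables rules with no target purely for their byte counters:
$ sudo iptables -A INPUT -i lo -p tcp --dport 42999   # counts client -> server bytes
$ sudo iptables -A INPUT -i lo -p tcp --sport 42999   # counts server -> client bytes
$ sudo iptables -L INPUT -v -n | grep 42999           # read the packet/byte counters
Rules without a -j target only count matching packets and let them pass on, so they don't affect the connection itself.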

GRE tunnel issues - one-sided communication

I have two machines:
Ubuntu 16.04 server VM (172.18.6.10)
Proxmox VE5 station (192.168.6.30)
They communicate through a third machine that forwards packets between them. I want to create a GRE tunnel between the two machines; to do that and make it persistent, I have edited /etc/network/interfaces and added a GRE interface and tunnel to be created on boot.
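A hypothetical reconstruction of such a stanza for the Ubuntu side, using the addresses from the question (the netmask is an assumption), might look like this:
# /etc/network/interfaces (hypothetical reconstruction, not the poster's actual file)
auto gre1
iface gre1 inet static
    address 10.10.10.1
    netmask 255.255.255.252
    pre-up ip tunnel add gre1 mode gre local 172.18.6.10 remote 192.168.6.30 ttl 255
    post-down ip tunnel del gre1
The Proxmox side would mirror this with address 10.10.10.2 and the local/remote endpoints swapped.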
After they were created, I tried to ping one machine from the other to check connectivity, pinging the GRE interface IP addresses (10.10.10.1 and 10.10.10.2). The issue is that when I ping the Proxmox machine from Ubuntu I get no feedback, but when I run tcpdump on gre1 on Proxmox I can see that the packets are received and an ICMP reply goes out.
When I run the ping the other way around and check with tcpdump on the Ubuntu machine, I see nothing. My understanding is that packets get lost or blocked when they leave Proxmox for Ubuntu via gre1: Ubuntu can clearly send packets to Proxmox, but the reply never comes back. How can I fix this?
Check whether packet forwarding is enabled in the kernel of the third machine that you use for communication between the other two.
Check /etc/sysctl.conf and see if you have this:
net.ipv4.ip_forward = 1
If it's commented out (#), uncomment it, save the file, and issue:
sysctl -p
Then try the pings again...
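To check and flip the setting immediately, without editing the file first, a quick sketch:
$ sysctl net.ipv4.ip_forward             # 0 means forwarding is off
$ sudo sysctl -w net.ipv4.ip_forward=1   # takes effect right away; sysctl.conf makes it persistent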

Directly accessing eth0 when using Docker while Cisco AnyConnect Secure Mobility Client is providing a VPN tunnel

When VPN is active, all the traffic seems to be tunneled through csctun0.
Using VirtualBox, I am able to set up a "network bridge" to eth0, which seems to completely ignore the manipulations made by Cisco's software. The VirtualBox VM connects directly to my local network and accesses local network devices and the internet directly.
I want to achieve the same with Docker containers, but Docker's bridge seems to work differently.
What is necessary to let a Docker container bypass Cisco's tunnel like a VirtualBox does?
Edit:
As suggested by @NetworkMeister, I tried to use "macvlan" and followed the instructions at http://hicu.be/docker-networking-macvlan-bridge-mode-configuration, but it fails when I try to ping the local gateway:
# docker exec -ti container0 ping -c 4 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: Destination Host Unreachable
64 bytes from 10.0.0.1: Destination Host Unreachable
64 bytes from 10.0.0.1: Destination Host Unreachable
64 bytes from 10.0.0.1: Destination Host Unreachable
--- 10.0.0.1 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
Docker's default bridge network allows you to NAT your containers into the physical network.
To achieve what you know as a "bridged network" from VirtualBox, use Pipework or, if you are on the cutting edge, try the Docker macvlan driver, which is, for now, experimental.
One (ugly) solution would be to run your Docker container with --net=host. That way your container doesn't get a network namespace of its own; it shares the host's interfaces and has the same network access as any process on your physical machine, so it should work.
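If you want to try the macvlan driver, here is a minimal sketch, assuming the 10.0.0.0/24 subnet and gateway from the question and eth0 as the parent interface (the network name and container IP are arbitrary choices):
$ docker network create -d macvlan \
    --subnet=10.0.0.0/24 --gateway=10.0.0.1 \
    -o parent=eth0 lan0
$ docker run --rm -it --net=lan0 --ip=10.0.0.50 alpine ping -c 4 10.0.0.1
Note that with macvlan the host itself typically cannot reach its own containers over the parent interface, so test from another machine on the LAN.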

arp response received but ICMP packets are not reaching to own host even

My environment has two hosts and a VM on each host. An NVGRE tunnel is created, as the VMs and hosts belong to different subnets. I am using Windows Server 2012 R2 for both the hosts and the VMs, and the hosts are connected back to back. If I put the VMs and hosts in the same subnet, ping works.
Both VMs receive ARP requests and responses from each other, and each VM's ARP cache has a dynamic entry for the other VM.
But an ICMP request packet sent from a VM is not even seen on its own host.
You cannot just ping from one host to another host.
To ping a provider address from your host, the -p option is needed.
Example:
$address = (Get-NetVirtualizationProviderAddress).ProviderAddress
ping -p $address
Please post your virtualization lookup records if you need more help.
Run the following commands as administrator.
Get-NetVirtualizationLookupRecord
Get-NetVirtualizationCustomerRoute
Also make sure your VM's firewall allows ICMP echo.
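For that last check, one hedged option is to add an explicit inbound ICMP rule inside the VM (a sketch; the display name is an arbitrary choice):
# Run inside the VM as administrator
New-NetFirewallRule -DisplayName "Allow ICMPv4 Echo" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow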
