Even though the internet connection is working properly, traceroute only shows the node IP. Why?
Kubernetes version
1.21
Setting -p 443 will result in hosts along the route not processing the probe. You can try kubectl run busybox --image busybox --restart Never -it --rm -- traceroute -4 -l -v -m 30 google.com. This command will show you the IPs along the route, provided your cluster has no network policy or subnet security group blocking the way.
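If the hops still come back empty, it is worth ruling out a NetworkPolicy before blaming the route; a quick check, assuming you have kubectl access to the cluster (the pod name tracer is arbitrary):

```shell
# List any NetworkPolicies in the cluster that could be dropping probes
kubectl get networkpolicy --all-namespaces

# Re-run the probe from a throwaway pod (deleted on exit by --rm)
kubectl run tracer --image=busybox --restart=Never -it --rm -- \
  traceroute -4 -m 30 google.com
```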
I want two containers in Docker to communicate, and I'm using netcat for the test. First I created the Dockerfile:
FROM ubuntu
WORKDIR /root
RUN apt-get update && apt-get install netcat iputils-ping -y
And built the image with:
docker build . -t ubuntu_netcat
Also I have created a new network:
docker network create --driver bridge nettest
Then I run two containers:
docker run --net=nettest --expose=8080 -it --name pc1 ubuntu_netcat
docker run --net=nettest --link=pc1 -it --name pc2 ubuntu_netcat
At first container (pc1) I listen on port 8080 with netcat command:
nc -vlk 8080
And I expect to communicate with it from the second container (pc2) executing:
nc -v pc1 8080
But I just got a connection refused:
root@c592b2015439:~# nc -v pc1 8080
pc1.nettest [172.18.0.2] 8080 (?) : Connection refused
I have been looking at the Docker docs and everything seems to be correct. In fact I can ping between the containers successfully, so they can reach each other, but something is wrong with the ports.
What am I doing wrong?
Thanks
It looks like this version of netcat on Ubuntu doesn't listen the way it normally does. You have to specify -p for the port (even though the usage text makes the port look like a positional argument).
Your netcat listener command should be:
nc -vlkp 8080
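A quick way to confirm which variant is installed, plus the corrected listener/client pair (a sketch; pc1 and pc2 are the containers from the question):

```shell
# The first line of the usage text identifies the netcat variant
nc -h 2>&1 | head -n 1

# In pc1: the traditional variant only binds the port via -p
nc -vlkp 8080

# In pc2: connect by container name (resolved on the nettest network)
nc -v pc1 8080
```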
We have a dockerized server application that is doing auto-discovery of physical appliances on the network by listening for multicast packets on port 6969. So we need our docker container to be able to receive these packets from devices outside the host, through the host, and in to the container. I've seen some similar issues and done a lot of reading but I'm still unable to get the server to respond to these multicast packets.
I'm sitting on Wireshark watching network traffic, but I'm not a specialist. I know Docker creates a MASQUERADE address to make the traffic all look like it's coming from the Docker gateway, so when I watch veth I see mostly talk between 172.17.0.1 and 172.17.0.2 although my server is unable to retrieve any information about the devices on the network. (If I run outside of docker, I have no issues of course.)
I can't use --net=host as, like others, we make use of the --link feature. I've tried the following variations...
docker run --name app -p 6969:6969 -d me/app:latest
docker run --name app -p 0.0.0.0:6969:6969 -d me/app:latest (This one I could have sworn worked once but now doesn't?)
docker run --name app -p 0.0.0.0:6969:6969/udp -d me/app:latest
docker run --name app -p 255.255.255.255:6969:6969 -d me/app:latest
Any help or insight you could provide would be greatly appreciated.
Try to enable multicast on your NICs:
ip link set eth0 multicast on
Turn on IP forwarding: echo 1 > /proc/sys/net/ipv4/ip_forward
You need to explicitly set, or at least check, that it is enabled on the relevant interfaces:
net.ipv4.conf.all.mc_forwarding = 1
net.ipv4.conf.eth0.rp_filter=0
Allow the multicast traffic:
iptables -I INPUT -d 224.0.0.0/4 -j ACCEPT
iptables -I FORWARD -d 224.0.0.0/4 -j ACCEPT
Also, you might need to add a route for the multicast traffic:
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
Change the TTL of the multicast sender:
iptables -t mangle -A OUTPUT -d <group> -j TTL --ttl-set 128
Where group is the multicast group address of the stream you want to change the TTL of.
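The steps above can be collected into one script; a sketch with placeholder interface and group values (note that mc_forwarding is normally managed by a multicast routing daemon, so it is only checked here, not written):

```shell
#!/bin/sh
NIC=eth0                  # the interface the multicast arrives on
GROUP=239.255.0.1         # placeholder group address; substitute your own

ip link set "$NIC" multicast on             # enable multicast on the NIC
echo 1 > /proc/sys/net/ipv4/ip_forward      # turn on IP forwarding
sysctl net.ipv4.conf.all.mc_forwarding      # should report 1
sysctl -w "net.ipv4.conf.$NIC.rp_filter=0"  # relax reverse-path filtering

iptables -I INPUT   -d 224.0.0.0/4 -j ACCEPT   # allow multicast in
iptables -I FORWARD -d 224.0.0.0/4 -j ACCEPT   # and through to the bridge

# route multicast out of the right interface
route add -net 224.0.0.0 netmask 240.0.0.0 dev "$NIC"

# raise the TTL on packets sent to the group
iptables -t mangle -A OUTPUT -d "$GROUP" -j TTL --ttl-set 128
```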
Also, you can start a multicast proxy.
PS:
If the above doesn't help, you should try starting the docker container with the --net=none option and using pipework with the following command:
pipework docker0 -i eth0 CONTAINER_ID IP_ADDRESS/IP_MASK#DEFAULT_ROUTE_IP
which creates an eth0 interface inside the container with the IFF_MULTICAST flag and the defined IP address.
I have created a Docker multi-host network using the Docker overlay network driver with 4 nodes: node0, node1, node2, and node3. Node0 acts as the key-value store which shares node information, while node1, node2, and node3 are bound to it.
Here are node1 networks:
user@node1$ docker network ls
NETWORK ID NAME DRIVER
04adb1ab4833 RED overlay
[ . . ]
As for node2 networks:
user@node2$ docker network ls
NETWORK ID NAME DRIVER
04adb1ab4833 RED overlay
[ . . ]
container1 is running on node1, which hosts the network named RED.
user@node1$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f9bacac3c01d ubuntu "/bin/bash" 3 hours ago Up 2 hours container1
Docker added an entry to /etc/hosts for each container that belongs to the RED overlay network.
user@node1$ docker exec container1 cat /etc/hosts
10.10.10.2 d82c36bc2659
127.0.0.1 localhost
[ . . ]
10.10.10.3 container2
10.10.10.3 container2.RED
From node2, I'm trying to access container1 running on node1. I tried the command below but it returns an error.
`user@node2$ docker exec -i -t container1 bash`
Error response from daemon: no such id: container1
Any suggestion?
Thanks.
The network is shared only by the containers.
While the network is shared among the containers across the multi-host overlay, the docker daemons cannot communicate with each other as is.
The user@node2$ docker exec -i -t container1 bash does not work because, indeed, no container with the id container1 is running on node2.
Accessing a remote Docker daemon
Docker daemons communicate through a socket: a UNIX socket by default, but it is possible to pass the --host option to specify other sockets the daemon should bind to.
See the docker daemon man page:
-H, --host=[unix:///var/run/docker.sock]: tcp://[host:port] to bind or unix://[/path/to/socket] to use.
The socket(s) to bind to in daemon mode specified using one or more
tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.
Thus, it is possible to reach, from any node, a docker daemon bound to a TCP socket.
The command user@node2$ docker -H tcp://node1:port exec -i -t container1 bash would work well.
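Sketched out, the two sides look like this (port 2375 and plain TCP are assumptions; a production daemon should be protected with TLS):

```shell
# On node1: bind the daemon to TCP in addition to the local UNIX socket
# (older releases spell this "docker daemon" or "docker -d" instead of dockerd)
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# On node2: target node1's daemon for a single command...
docker -H tcp://node1:2375 exec -i -t container1 bash

# ...or for the whole session
export DOCKER_HOST=tcp://node1:2375
docker ps
```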
Docker and Docker cluster (Swarm)
I do not know what you are trying to deploy; maybe you are just playing around with the tutorials, and that's great! You may be interested to look into Swarm, which deploys a cluster of Docker daemons. In short: you can use several nodes as if they were one powerful docker daemon, accessed through a single node with the whole Docker API.
I am trying to host a simple static site using the Docker Nginx Image from Dockerhub: https://registry.hub.docker.com/_/nginx/
A note on my setup, I am using boot2docker on OSX.
I have followed the instructions, and even so I cannot connect to the running container:
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker build -t wargames-front-end .
Sending build context to Docker daemon 813.6 kB
Sending build context to Docker daemon
Step 0 : FROM nginx
---> 42a3cf88f3f0
Step 1 : COPY app /usr/share/nginx/html
---> Using cache
---> 61402e6eb300
Successfully built 61402e6eb300
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker run --name wargames-front-end -d -p 8080:8080 wargames-front-end
9f7daa48a25bdc09e4398fed5d846dd0eb4ee234bcfe89744268bee3e5706e54
MacBook-Pro:LifeIT-war-games-frontend ryan$ curl localhost:8080
curl: (52) Empty reply from server
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f7daa48a25b wargames-front-end:latest "nginx -g 'daemon of 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:8080->8080/tcp, 443/tcp wargames-front-end
Instead of localhost, use the boot2docker IP. First run boot2docker ip and use that IP:
<your-b2d-ip>:8080. Also you need to make sure you forwarded port 8080 in VirtualBox for boot2docker.
Here is the way to connect to the nginx docker container service:
docker ps # confirm nginx is running, which you have done.
docker port wargames-front-end # get the ports, for example: 80/tcp, 0.0.0.0:8080->8080/tcp, 443/tcp
boot2docker ip # get the IP address, for example: 192.168.59.103
So now, you should be fine to connect to:
http://192.168.59.103:8080
https://192.168.59.103:8080
Here's how I got it to work. The nginx image listens on port 80 inside the container, so the mapping has to target container port 80, not 8080:
docker kill wargames-front-end
docker rm wargames-front-end
docker run --name wargames-front-end -d -p 8080:80 wargames-front-end
Then I went into my VirtualBox settings and set up port forwarding for the boot2docker VM.
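The VirtualBox forwarding can also be scripted with VBoxManage; a sketch assuming the default VM name boot2docker-vm:

```shell
# Forward host port 8080 to port 8080 on the running boot2docker VM
VBoxManage controlvm boot2docker-vm natpf1 "nginx,tcp,,8080,,8080"

# Map host port 8080 to container port 80, where nginx actually listens
docker run --name wargames-front-end -d -p 8080:80 wargames-front-end
curl "$(boot2docker ip):8080"
```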
I'm trying to set up a docker dnsmasq container so that I can have all my docker containers look up the domain names rather than having hard-coded IPs (if they are on the same host). This fixes an issue with the fact that one cannot alter the /etc/hosts file in docker containers, and this allows me to easily update all my containers in one go, by altering a single file that the dnsmasq container references.
It looks like someone has already done the hard work for me and created a dnsmasq container. Unfortunately, it is not "working" for me. I wrote a bash script to start the container as shown below:
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p='127.0.0.1:53:5353/udp' \
-d sroegner/dnsmasq
Before running that, I created the dnsmasq.hosts directory and inserted a single file within it called hosts.txt with the following contents:
192.168.1.3 database.mydomain.com
Unfortunately whenever I try to ping that domain from within:
the host
The dnsmasq container
another container on the same host
I always receive the ping: unknown host error message.
I tried starting the dnsmasq container without daemon mode so I could debug its output, which is below:
dnsmasq: started, version 2.59 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack IDN
dnsmasq: reading /etc/resolv.dnsmasq.conf
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: read /etc/hosts - 7 addresses
dnsmasq: read /dnsmasq.hosts//hosts.txt - 1 addresses
I am guessing that I have not specified the -p parameter correctly when starting the container. Can somebody tell me what it should be for other docker containers to lookup the DNS, or whether what I am trying to do is actually impossible?
The build script for the docker dnsmasq service needs to be changed in order to bind to your server's public IP, which in this case is 192.168.1.12 on my eth0 interface.
#!/bin/bash
NIC="eth0"
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
MY_IP=$(ifconfig $NIC | grep 'inet addr:'| grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1}')
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p=$MY_IP:53:5353/udp \
-d sroegner/dnsmasq
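On systems without ifconfig, the same address can be pulled out with iproute2; a sketch (NIC is whatever your public interface is called):

```shell
NIC="eth0"
# -o prints one line per address; field 4 has the form ADDR/PREFIX
MY_IP=$(ip -4 -o addr show "$NIC" | awk '{ print $4 }' | cut -d/ -f1)
echo "$MY_IP"
```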
On the host (in this case Ubuntu 12), you need to update /etc/resolv.conf or the /etc/network/interfaces file so that your public IP (on the eth0 or eth1 device) is registered as the nameserver.
You may want to set a secondary nameserver (e.g. Google's) for whenever the container is not running, by changing the line to dns-nameservers xxx.xxx.xxx.xxx 8.8.8.8 (i.e. the addresses are space-separated on one line, with no comma or extra line).
You then need to restart your networking service with sudo /etc/init.d/networking restart if you updated the /etc/network/interfaces file, so that the /etc/resolv.conf file, which docker copies into each container when it is created, is regenerated.
Now restart all of your containers
sudo docker stop $CONTAINER_ID
sudo docker start $CONTAINER_ID
This causes their /etc/resolv.conf files to be updated so they point to the new nameserver settings.
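Restarting every container on the host can be sketched as one loop (this bounces all running containers, so use with care):

```shell
# Stop and start each running container so it picks up the new resolv.conf
for id in $(sudo docker ps -q); do
  sudo docker stop "$id"
  sudo docker start "$id"
done
```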
DNS lookups in all your docker containers (that you built since making the changes) should now work using your dnsmasq container!
As a side note, this means docker containers on other hosts can also take advantage of your dnsmasq service on this host, as long as their host's nameserver settings point to this server's public IP.