Can't ping between VMs in different zones - networking

I have two VM instances on the same network (default) and the same subnet (default), but in two different zones. I logged into one VM and pinged the other, but got no response. What do I have to do to make them communicate? Here is the system information:
Network:
- Name: default
Subnet:
- Name: default
- Network: default
- IP range: 10.148.0.0/20
- Region: asia-southeast1
VM1:
- Subnet: default
- IP: 10.148.0.54
- Zone: asia-southeast1-c
VM2:
- Subnet: default
- IP: 10.148.0.56
- Zone: asia-southeast1-b
Please help me! thank you!

First check whether ARP resolves for the remote VM you want to ping.
Also check whether a firewall rule on the default network is blocking communication between the VMs.
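Since both VMs live in the same region and subnet, a quick sanity check (a sketch using Python's stdlib ipaddress module and the addresses from the question) confirms that traffic between them never leaves the subnet, so routing is not the problem and a firewall rule is the likely culprit:

```python
import ipaddress

# Subnet and VM addresses taken from the question
subnet = ipaddress.ip_network("10.148.0.0/20")
vm1 = ipaddress.ip_address("10.148.0.54")
vm2 = ipaddress.ip_address("10.148.0.56")

# Zones within one region share the regional subnet, so if both
# addresses fall inside it, no extra route is involved -- only a
# firewall rule can be blocking the ping.
print(vm1 in subnet, vm2 in subnet)  # -> True True
```

If the firewall turns out to be the issue, a rule allowing ICMP on the default network (e.g. via `gcloud compute firewall-rules create ... --network default --allow icmp`) typically restores ping.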

Related

How to get source IP from a Pod in kubernetes?

I have set up a working k8s cluster.
Each node of the cluster is inside network 10.11.12.0/24 (the physical network). On top of this network runs a flannel (canal) CNI.
Each node has another network interface (not managed by k8s) with CIDR 192.168.0.0/24.
When I deploy a service like:
kind: Service
apiVersion: v1
metadata:
  name: my-awesome-webapp
spec:
  selector:
    server: serverA
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  externalTrafficPolicy: Local
  type: LoadBalancer
  externalIPs:
    - 192.168.0.163
The service is reachable at http://192.168.0.163, but the Pod sees source IP 192.168.0.163 (the server's eth0 address), not my real source IP (192.168.0.94).
Deployment consists of 2 pods with the same spec.
Is it possible for the Pods to see my source IP?
Does anyone know how to manage this? externalTrafficPolicy: Local does not seem to be working.
Kubernetes replaces the source IP with cluster/node IPs; the detailed information can be found in this document. Kubernetes has a feature to preserve the client source IP, which I believe you are already aware of.
This looks like a bug in Kubernetes: there is already an open issue for externalTrafficPolicy: Local not working properly.
I suggest posting on that issue to get more attention on it.
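To see why the Pod observes a node/cluster address at all: a plain TCP server only ever sees the peer address of the connection, and once kube-proxy SNATs the traffic, that peer is a cluster-internal IP. A minimal standalone sketch (a hypothetical demo server, not the asker's app) that reports the peer address it sees:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class EchoPeer(BaseHTTPRequestHandler):
    """Replies with the client IP exactly as the server sees it."""
    def do_GET(self):
        body = self.client_address[0].encode()  # peer IP of the TCP connection
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral loopback port; inside a Pod, this peer address is
# what "seeing the source IP" means.
server = HTTPServer(("127.0.0.1", 0), EchoPeer)
threading.Thread(target=server.serve_forever, daemon=True).start()

seen = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_address[1]}/").read().decode()
print(seen)  # -> 127.0.0.1 when run locally; after SNAT it would be a node IP
server.shutdown()
```

With a working externalTrafficPolicy: Local, kube-proxy skips the SNAT on the node that hosts the Pod, which is why that setting is the intended fix here.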

Why do ports need to be specified twice separated by a colon?

A lot of times, I see ports described twice with a colon like in this Docker Compose file from the Docker Networking in Compose page:
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres

networks:
  default:
    # Use a custom driver
    driver: custom-driver-1
I've often wondered why it is "8000:8000" and not simply "8000".
Then I saw this example, in which the two ports differ:
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "8001:5432"
Can someone explain what this port representation means?
The first port is the host's port and the second is the remote port (i.e. the one inside the container). That expression binds the container's port to the host's port.
In the example you map the container's port 8000 to the host's port 8000, but it's perfectly normal to use different ports (e.g. 48080:8080).
If the host port and the ':' of the published port are omitted, e.g. docker run -d -p 3000 myimage, Docker will auto-assign a (high-numbered) host port for you. You can see it by running docker ps.
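If the short "HOST:CONTAINER" string feels ambiguous, the Compose file format (3.2 and later) also offers a long syntax that names both sides explicitly; the db example above could be written as:

```yaml
services:
  db:
    image: postgres
    ports:
      - target: 5432     # port inside the container
        published: 8001  # port on the host
        protocol: tcp
```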

Docker-compose container using host DNS server

I'm running several containers on my "Ubuntu 16.10 Server" in a "custom" bridge network with Compose 1.9 (yml version 2.1). Most of my containers internally use the same ports, so there is no way for me to use the "host" network driver.
My containers are all links together, using the dedicated links attribute.
But, I also need to access services exposed outside of my containers. These services have dedicated URLs, with names registered in my company's DNS server.
While I have no problem to use public DNS and reach any public service from within my containers, I just can't reach my private DNS.
Do you know a working solution to use private DNS from a container? Or even better, use host's network DNS configuration?
PS: Of course, I can link to my company's services using the extra_hosts attribute in my services in my docker-compose.yml file. But... that's definitely not the point of having a DNS. I don't want to register all my services in my YML file, and I don't want to update it every time a service's IP changes in my company.
Context :
Host: Ubuntu 16.10 server
Docker Engine: 1.12.6
Docker Compose: 1.9.0
docker-compose.yml: 2.1
Network: Own bridge.
docker-compose.yml file (extract):
version: '2.1'
services:
  nexus:
    image: sonatype/nexus3:$NEXUS_VERSION
    container_name: nexus
    restart: always
    hostname: nexus.$URL
    ports:
      - "$NEXUS_81:8081"
      - "$NEXUS_443:8443"
    extra_hosts:
      - "repos.private.network:192.168.200.200"
    dns:
      - 192.168.3.7
      - 192.168.111.1
      - 192.168.10.5
      - 192.168.10.15
    volumes_from:
      - nexus-data
    networks:
      - pic
networks:
  pic:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1
I tried with and without the ipam configuration for the pic network, without any luck.
Tests & Results:
docker exec -ti nexus curl repos.private.network
properly returns the HTML page served by this service
docker exec -ti nexus curl another-service.private.network
Returns curl: (6) Could not resolve host: another-service.private.network; Name or service not known
While curl another-service.private.network from the host returns the appropriate HTML page.
And "of course" another-service.private.network is known in my 4 DNS servers (192.168.3.7, 192.168.111.1, 192.168.10.5, 192.168.10.15).
You don't specify which environment you're running docker-compose in (e.g. Mac, Windows or Linux), so the required changes will vary a little. You also don't specify whether you're using Docker's default bridge network or a user-created bridge network.
In either case, by default, Docker should map DNS resolution from the Docker host into your containers. So if your Docker host can resolve the private DNS addresses, then in theory your containers should be able to as well.
I'd recommend reading the official Docker DNS documentation, which is pretty reasonable: here for the default Docker bridge network, and here for user-created bridge networks.
A slight gotcha: if you're running Docker for Mac, Docker Machine or Docker for Windows, remember that your Docker host is actually the VM running on your machine, not the physical box itself, so you need to ensure that the VM has the correct DNS resolution options set. You will need to restart your containers for DNS resolution changes to be picked up.
You can of course override all the default settings using docker-compose. It has full options for explicitly setting DNS servers, DNS search options etc. As an example:
version: '2'
services:
  application:
    dns:
      - 8.8.8.8
      - 4.4.4.4
      - 192.168.9.45
You'll find the documentation for those features here.
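Alternatively, on a Linux host you can set DNS servers for every container at the daemon level instead of per service. A sketch of /etc/docker/daemon.json using the resolvers from the question (restart the Docker daemon afterwards):

```json
{
  "dns": ["192.168.3.7", "192.168.111.1", "192.168.10.5", "192.168.10.15"]
}
```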

Ping failed to second ip in openstack instance

I have an RDO OpenStack environment on a machine for testing. RDO was installed with the packstack --allinone command. Using HOT I have created two instances: one with a CirrOS image and another with Fedora. The Fedora instance has two interfaces connected to the same network, while CirrOS has only one interface, connected to that same network. The template looks like this -
heat_template_version: 2015-10-15
description: Simple template to deploy two compute instances
resources:
  local_net:
    type: OS::Neutron::Net
  local_signalling_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: local_net }
      cidr: "50.0.0.0/24"
      ip_version: 4
  fed:
    type: OS::Nova::Server
    properties:
      image: fedora
      flavor: m1.small
      key_name: heat_key
      networks:
        - port: { get_resource: fed_port1 }
        - port: { get_resource: fed_port2 }
  fed_port1:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
  fed_port2:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
  cirr:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      key_name: heat_key
      networks:
        - port: { get_resource: cirr_port }
  cirr_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
The Fedora instance got two IPs (50.0.0.3 and 50.0.0.4). CirrOS got IP 50.0.0.5. I can ping 50.0.0.3 from the CirrOS instance but not 50.0.0.4. Only if I manually bring down the interface with IP 50.0.0.3 on the Fedora instance can I ping 50.0.0.4 from the CirrOS instance. Is there a restriction in the neutron configuration that prevents pinging both IPs of the Fedora instance at the same time? Please help.
This happens because of the default firewalling done by OpenStack networking (neutron): it simply drops any packet received on a port if the packet's source address does not match the IP address assigned to that port.
When the CirrOS instance sends a ping to 50.0.0.4, the Fedora instance receives it on the interface with IP address 50.0.0.4. However, when it responds to CirrOS's IP address 50.0.0.5, the Linux networking stack on the Fedora machine has two interfaces to choose from for the response (because both are connected to the same network). In your case, Fedora chose to respond via the interface with 50.0.0.3. The source IP address in the reply is still 50.0.0.4, though, so the OpenStack networking layer simply drops it.
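If you do need to keep both interfaces, neutron can also be told to accept extra source addresses on a port via its allowed-address-pairs feature, so replies sourced from the "wrong" interface are no longer dropped. A sketch (the port ID is a placeholder):

```shell
# Allow the port owning 50.0.0.3 to also emit packets sourced from 50.0.0.4,
# so neutron's source-address check no longer drops the reply.
openstack port set --allowed-address ip-address=50.0.0.4 <fed-port1-id>
```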
The general recommendation is not to have multiple interfaces on the same network. If you want multiple IP addresses from the same network for your VM, you can use the "fixed_ips" option in your heat template:
fed_port1:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: local_net }
    fixed_ips:
      - ip_address: "50.0.0.4"
      - ip_address: "50.0.0.3"
Since the DHCP server would offer only one IP address, Fedora would be configured with only one IP. You can add the other IP to the interface using the "ip addr add" command (see http://www.unixwerk.eu/linux/redhat/ipalias.html):
ip addr add 50.0.0.3/24 brd + dev eth0 label eth0:0

Docker container cannot resolve request to service in another container

I'm running gitlab-ce and gitlab-ci-multi-runner in separate Docker containers, but on the same server.
Gitlab CE works fine, I can access it via browser and clone projects using both http and ssh.
However, my runner cannot connect to GitLab using the domain/server IP. It can connect only via the local Docker network (for example using an IP address like 172.17.0.X or, if linked, a service alias).
Pinging the domain/server IP returns a response.
I tried to link it as gitlab:example.domain.com, but it didn't work, as the runner somehow resolved the server IP address instead of the local network address:
Checking for builds... failed: couldn't execute POST against http://example.domain.com/ci/api/v1/builds/register.json: Post http://example.domain.com/ci/api/v1/builds/register.json: dial tcp server.ip:80: i/o timeout
#Edit
docker-compose.yml
gitlab:
  image: gitlab/gitlab-ce:8.2.2-ce.0
  hostname: domain.name
  privileged: true
  volumes:
    - ./gitlab-config:/etc/gitlab
    - ./gitlab-data:/var/opt/gitlab
    - ./gitlab-logs:/var/log/gitlab
  restart: always
  ports:
    - server.ip:22:22
    - server.ip:80:80
    - server.ip:443:443
runner:
  image: gitlab/gitlab-runner:alpine
  restart: always
  volumes:
    - ./runner-config:/etc/gitlab-runner
    - /var/run/docker.sock:/var/run/docker.sock
I have no clue what's the issue here.
I'd appreciate your help.
Thanks in advance! :)
Seems like it was a firewall problem. Allowing traffic on the docker0 interface let the containers through. :)
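For anyone hitting the same wall, the unblock on an iptables-based host can look like the following (a sketch only; rule ordering and your distribution's firewall front-end may differ):

```shell
# Accept traffic arriving from the docker0 bridge (container -> host services)
sudo iptables -I INPUT -i docker0 -j ACCEPT

# Or, if the host uses ufw:
sudo ufw allow in on docker0
```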
