How do IP aliases work on a Google Cloud Compute Engine instance? - networking

When you set up an IP alias via the gcloud command or the web interface, it works out of the box. But on the machine itself I do not see any configuration: no ip addr entries, no firewall rules, no routes that would make the machine pingable on that address - yet it is pingable (locally and remotely)! (For example 10.31.150.70, when you set up a 10.31.150.64/26 alias subnet and your primary IP is 10.31.150.1.)
On the other hand, the primary IP of the machine has a /32 netmask. For example:
10.31.150.1/32, gateway: 10.31.0.1/16. So how can the machine reach the gateway, 10.31.0.1, when the gateway is outside that range?
When removing the primary IP via ip addr del, the aliases are no longer pingable.

Google runs a networking daemon on your instance. It runs as the google-network-daemon service. This code is open source and viewable at this repo. The repo contains a Python module called google_compute_engine which manages IP aliasing, among other things. You can browse the code to understand how Google implements this (they use either ip route or ifconfig, depending on the platform).
To see the alias route added by Google on a Debian box (where ip route is used underneath for aliasing), run the following command:
ip route ls table local type local dev eth0 scope host proto 66
If you know your Linux commands, you can stop the daemon, remove the appropriate routes, and then assign the alias IP address to your primary interface as a second IP address to see the ifconfig approach in action as well.
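As a rough sketch of that experiment, assuming the alias range from the question (10.31.150.64/26 on eth0) and the daemon named above; exact route arguments may differ on your image:
# stop the daemon so it does not re-add its route
sudo systemctl stop google-network-daemon
# remove the local route the daemon installed for the alias range (proto 66 marks its routes)
sudo ip route del local 10.31.150.64/26 dev eth0 proto 66 table local
# assign one alias address directly to the interface instead
sudo ip addr add 10.31.150.70/32 dev eth0
The alias stays reachable either way; only the mechanism that answers for it on the VM changes.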

When alias IP ranges are configured, GCP automatically installs VPC network routes for primary and alias IP ranges for the subnet of the primary network interface. Alias IP ranges are routable within the GCP virtual network without requiring additional routes. That is the reason why there is no configuration on the VM itself but still it's pingable. You do not have to add a route for every IP alias and you do not have to take route quotas into account.
More information regarding Alias IP on Google Cloud Platform (GCP) can be found in this help center article.
Be aware that Compute Engine networks only support IPv4 unicast traffic, and the netmask will show as /32 on the VM. The VM will nevertheless be able to reach the gateway of the subnet it belongs to: for example, 10.31.0.0/16 covers hosts from 10.31.0.1 to 10.31.255.254, and the host 10.31.150.1 is within that range.
To further clarify why VM instances are assigned a /32 mask: the /32 is an artificial construct. The instance talks to the software-defined network, which creates and manages the "real" subnets. So it is really a link between the single address and the gateway for the subnet. As long as that link layer is there, communication is established and everything works.
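To illustrate, on a VM configured this way the kernel routing table typically looks something like the following (addresses taken from the example above; exact fields vary by image):
default via 10.31.0.1 dev eth0
10.31.0.1 dev eth0 scope link
The gateway is reached through an on-link host route rather than through a subnet route, so the /32 mask does not get in the way.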
In addition, network masks are enforced at the network layer. This helps avoid generating unnecessary broadcast traffic (which the underlying network would not distribute anyway).
Note that removing the primary IP will break reachability to the metadata server, and therefore the IP aliases will no longer be accessible.

Related

Connect to OpenStack instance via the internet through the router

I've recently found out that the external network for our OpenStack (Ocata) setup has maxed out the available IP addresses in its allocation table. In fact, it has over-allocated, with -9 free IPs. So, to manage the limited IP addresses, is it possible to access an instance in a project directly from an external network (the internet) via the project's router? This way only a single IP address needs to be allocated per project instead of allocating one to multiple instances per project.
The short answer would be NO, but there are a couple of workarounds that come to mind (not that they are pretty, but they will work).
If any instance in your private network has a floating IP, you can use that host as a jump host (bastion host) to SSH into the target host. This also brings the benefits of port forwarding/SSH tunnelling if you want to reach some other port.
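As a rough illustration of the jump-host approach (the addresses here are placeholders, not taken from the question):
# jump through the bastion that holds the floating IP straight to the target's private address
ssh -J user@203.0.113.10 user@192.168.1.20
# or forward a local port to a service on the target through the bastion
ssh -L 8080:192.168.1.20:80 user@203.0.113.10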
You can always access any host on the private networks through the qdhcp or qrouter namespace from the network node:
ip netns exec qdhcp-XXXXXXX ssh user@internal-IP

How are external ips supposed to work in OpenShift (4.x)?

I'm looking for some help in understanding how external ips are supposed to work (specifically on OpenShift 4.4/4.5 bare metal).
It looks like I can assign arbitrary external ips to a service regardless of the setting of spec.externalIP.policy on the cluster network. Is that expected?
Once an external ip is assigned to a service, what's supposed to happen? The OpenShift docs are silent on this topic. The k8s docs say:
Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints.
Which suggests that if I (a) assign an external ip to a service and (b) configure that address on a node interface, I should be able to reach the service on the service port at that address, but that doesn't appear to work.
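For reference, a minimal sketch of steps (a) and (b) as described above (service name, address, and interface are placeholders, not taken from the cluster in question):
# (a) add an external IP to an existing service
kubectl patch svc my-service -p '{"spec":{"externalIPs":["192.0.2.10"]}}'
# (b) put that address on an interface of the node that should answer for it
sudo ip addr add 192.0.2.10/32 dev eth0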
Poking around the nodes after setting up a service with an external ip, I don't see netfilter rules or anything else that would direct traffic for the external address to the appropriate pod.
I'm having a hard time finding docs that explain how all this is supposed to operate.

Google Cloud Platform networking: Resolve VM hostname to its assigned internal IP even when not running?

Is there any way in GCP to allow VM hostnames to be resolved to their IPs even when the VMs are stopped?
Listing VMs in a project reveals their assigned internal IP addresses even when the VMs are stopped. This means that, as long as the VMs aren't re-created, their internal IPs are statically assigned.
However, when our VMs are stopped, the DNS resolution stops working:
ping: my-vm: Name or service not known
even though the IP is kept assigned to it, according to gcloud compute instances list.
I've tried reserving the VM's current internal IP:
gcloud compute addresses create my-vm --addresses 10.123.0.123 --region europe-west1 --subnet default
However, the address name my-vm above is not related to the VM name my-vm, and the reservation has no effect (except for making the IP unavailable for automatic assignment in case the VM is re-created).
But why?
Some fault-tolerant software has a configuration for connecting to multiple machines for redundancy, and if at least one of the connections can be established, the software runs fine. But if a hostname cannot be resolved, such software will not start at all, forcing us to hard-code the names in /etc/hosts (which doesn't scale well to a cluster of two dozen VMs) or to use IP addresses (which gets hairy after a while). A specific example here is freeDiameter.
Ping uses ICMP, which requires that the target is running and responding to network requests.
Google Compute Engine VMs use DHCP for private IP addresses. DHCP is integrated with (communicates with) Google DNS: it informs DNS about running instances (VM IP address and hostname). If the VM is shut down, this link does not exist, and the DHCP/DNS information is updated/replaced/deleted hourly.
You can set up a Cloud DNS private zone, create entries for your VPC resources, and resolve private IP addresses and hostnames in a way that persists even while the VMs are stopped.
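A rough sketch of that approach with gcloud (zone name and DNS name are placeholders; the address is the one from the question, and flags may differ slightly between gcloud releases):
# create a private zone that is only visible inside the VPC
gcloud dns managed-zones create my-private-zone \
    --dns-name=internal.example.com. \
    --visibility=private \
    --networks=default \
    --description="Static records for VMs"
# add an A record for the VM that should stay resolvable while it is stopped
gcloud dns record-sets create my-vm.internal.example.com. \
    --zone=my-private-zone --type=A --ttl=300 --rrdatas=10.123.0.123
Records in such a zone are independent of the VM's lifecycle, so resolution keeps working while the instance is stopped.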

Please explain Kubernetes external addresses vs internal addresses

In a VMware environment, should the external address become populated with the VM's (or host's) IP address?
I have three clusters, and have found that only those using a "cloud provider" have external addresses when I run kubectl get nodes -o wide. It is my understanding that the "cloud provider" plugin (GCP, AWS, VMware, etc.) is what assigns the public IP address to the node.
KOPS deployed to GCP = external address is the real public IP address of the nodes.
Kubeadm deployed to VMware, using the VMware cloud provider = external address is the same as the internal address (a private range).
Kubeadm deployed, NO cloud provider = no external IP.
I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that it finds, using the "external ip". This only works with my first two clusters.
My tool runs on the local network of the clusters; should it be targeting the "internal ip" instead? In other words, is the internal ip ALWAYS the IP address of the VM or physical host (when installed on bare metal)?
Thank you
Bare metal will not have an "external-IP" for the nodes, and the "internal-ip" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.
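As a hedged illustration for the scraping tool, the same addresses can be read from the Node API via kubectl (the address types below are the standard ones; nodes without a cloud provider simply have no ExternalIP entry):
# print each node's InternalIP and, where present, its ExternalIP
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'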
When using k8s on bare metal, the external IP and load balancer functions don't natively exist. If you want to expose an "external IP" (in quotes because in most cases it would still be a 10.x.x.x address) from your bare-metal cluster, you would need to install something like MetalLB.
https://github.com/google/metallb

How does OpenStack assign IPs to virtual machines?

I want to know how OpenStack assigns IPs to virtual machines, and how to find out the ports and IPs used by a VM. Is it possible to find out the IP and ports being used by an application running inside the VM?
To create a floating IP for your VM, you can use this command:
openstack floating ip create public
To associate the IP with your VM, use the command below:
openstack server add floating ip your-vm-name your-ip-number
To list all the ports used by applications, SSH into your instance and run:
sudo lsof -i
Assuming you know the VM name, do the following. On the controller node, run:
nova interface-list VM-NAME
It will give you the port ID, IP address, and MAC address of the VM's interface.
You can log in to the VM and run netstat -tlnp to see which IPs and ports are being used by applications running inside the VM.
As to how a VM gets an IP, it depends on your deployment. In a basic OpenStack deployment, when you create a network and create a subnet under that network, you will see a DHCP namespace being created on the network node (run ip netns on the network node). The namespace name will be qdhcp-<network-id>. The dnsmasq process running inside that DHCP namespace allots IPs to the VMs. This is just one of the many ways in which a VM gets an IP.
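A quick way to see that path on the network node (the network ID is a placeholder; namespace layout varies with the Neutron configuration):
# list the namespaces; each qdhcp-* corresponds to one network with DHCP enabled
ip netns
# look at the DHCP port inside the namespace and confirm dnsmasq is serving it
sudo ip netns exec qdhcp-<network-id> ip addr
ps aux | grep dnsmasq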
This particular End User page of the official documentation could be a good start:
"Each instance can have a private, or fixed, IP address and a public, or floating, one.
Private IP addresses are used for communication between instances, and public ones are used for communication with the outside world.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IPs, configured by the cloud operator, is available in OpenStack Compute.
You can allocate a certain number of these to a project: The maximum number of floating IP addresses per project is defined by the quota.
You can add a floating IP address from this set to an instance of the project. Floating IP addresses can be dynamically disassociated and associated with other instances of the same project at any time.
Before you can assign a floating IP address to an instance, you first must allocate floating IPs to a project. After floating IP addresses have been allocated to the current project, you can assign them to running instances.
You can assign a floating IP address to one instance at a time."
There are, of course, deeper layers to look at in this section of the Admin Guide.
Regarding how to find out about ports and IPs, you have two options: the command-line interface or the API.
For example, if you are using Neutron* and want to find out the IPs or networks in use via the API:
GET v2.0/networks
And using the CLI:
$ neutron net-list
You can use similar commands for ports and subnets; however, I haven't personally tested whether you can get information about the application running in the VM this way.
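For instance, the analogous CLI calls would look like this (assuming the same Neutron CLI as above; newer releases use the unified openstack client instead):
$ neutron port-list
$ neutron subnet-list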
*Check which OpenStack release you're running. If it's an old one, chances are it's using the Compute service (Nova) for networking.
