OpenStack instance name resolution

I have a private OpenStack cluster running on five machines (let's call them blade1-blade5). Let's say I spin up ten instances named node01-node10. node01 through node06 spin up on blade1, then node07 through node10 each spin up with one instance on each of the remaining blades (node07 on blade2, node08 on blade3, etc.). If I ssh into node01, I can ping any of the other instances spawned on blade1 using their instance names (e.g. ping node03), but if I try to ping one of the instances on another machine (e.g. ping node08) I get an unknown host error. Similarly, if I ssh into one of the "singleton" instances, I can't ping any of the other instances using their names. I can ping any instance from any other instance using an IP address.
Clearly, for whatever reason OpenStack is only resolving the names of instances spawned on the same machine. Is there a way to allow name resolution for all instances, regardless of what machine they're spawned on?
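One low-tech workaround, assuming the openstack CLI is available and each instance has exactly one fixed IP, is to generate name-to-IP entries from the server list and distribute them yourself; a minimal sketch:
# Sketch: build "IP name" lines from the server list.
# Assumes the Networks column looks like "private=10.0.0.5" (one network, one IP).
openstack server list -f value -c Name -c Networks | while read -r name networks; do
    echo "${networks##*=} ${name}"
done > hosts.snippet
# Append hosts.snippet to /etc/hosts on each instance, or feed it to a DNS server you control.
Newer Neutron-based deployments can also enable internal DNS resolution (the dns_domain setting plus the dns extension driver), but whether that applies depends on your release and on whether you're running nova-network or Neutron.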

Related

GCE Network Load Balancer loops traffic back to VM

On GCE, using a Network Load Balancer (NLB), I have the following scenario:
1 VM with internal IP of 10.138.0.62 (no external IP)
1 VM with internal IP of 10.138.0.61 (no external IP)
1 NLB with a target pool (Backend) that contains both of these VMs
1 Health check that monitors a service on these VMs
The issue is that when one of these VMs hits the NLB IP address, the request immediately resolves back to the same instance that made it: it never gets balanced between the two VMs and never reaches the other VM, even if the VM making the request has failed its health check. For example:
VM on 10.138.0.62 is in target pool of NLB and its service is healthy.
VM on 10.138.0.61 is in target pool of NLB and its service is NOT healthy.
Make a request from the second VM, on 10.138.0.61, to the NLB, and even though this same VM has failed its health check, traffic will still be delivered to itself. It's basically ignoring the fact that there's an NLB and health checks entirely, and simply saying, "If the VM is in the target pool for this NLB and it attempts contact with the IP of the NLB, loop the traffic back to itself".
Note that if I remove the VM on IP 10.138.0.61 from the target pool of the NLB and try the connection again, it immediately goes through to the other VM that's still in the target pool, just like I'd expect it to. If I put the VM on IP 10.138.0.61 back in the target pool and attempt to hit the NLB, again it will only loop back to the calling machine on 10.138.0.61.
Googling around a bit, I saw that this behavior happens on some versions of Windows Server and its NLB, but I didn't expect this on GCE. Have others seen the same behavior? Is this just a known behavior that I should expect? If so, any workarounds?
This is working as intended. Because of how networking is configured in this virtualized environment, a load-balanced VM that sends a request to its own load balancer's IP will always have the request returned to itself, regardless of health-check status. Please check the link provided for more information.
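For reference, the target-pool membership change described in the question can be done with gcloud; the pool, instance, zone, and region names below are placeholders:
# Take the local VM out of the target pool so requests go to the other backend:
gcloud compute target-pools remove-instances my-pool \
    --instances=vm-61 --instances-zone=us-west1-a --region=us-west1
# Add it back afterwards:
gcloud compute target-pools add-instances my-pool \
    --instances=vm-61 --instances-zone=us-west1-a --region=us-west1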

Google Cloud Platform networking: Resolve VM hostname to its assigned internal IP even when not running?

Is there any way in the GCP, to allow VM hostnames to be resolved to their IPs even when the VMs are stopped?
Listing VMs in a project reveals their assigned internal IP addresses even when the VMs are stopped. This means that, as long as the VMs aren't re-created, their internal IPs are statically assigned.
However, when our VMs are stopped, the DNS resolution stops working:
ping: my-vm: Name or service not known
even though the IP is kept assigned to it, according to gcloud compute instances list.
I've tried reserving the VM's current internal IP:
gcloud compute addresses create my-vm --addresses 10.123.0.123 --region europe-west1 --subnet default
However, the address name my-vm above is not related to the VM name my-vm and the reservation has no effect (except for making the IP unavailable for automatic assignment in case of VM re-creation).
But why?
Some fault-tolerant software will have a configuration for connecting to multiple machines for redundancy, and if at least one of the connections can be established, the software will run fine. But if a hostname cannot be resolved, this software will not start at all, forcing us to hard-code the hostnames in /etc/hosts (which doesn't scale well to a cluster of two dozen VMs) or to use IP addresses (which gets hairy after a while). A specific example here is freeDiameter.
Ping uses the ICMP protocol, which requires that the target is running and responding to network requests.
Google Compute Engine VMs use DHCP for private IP addresses. DHCP is integrated with (communicates with) Google DNS: DHCP informs DNS about running network services (the VM's IP address and hostname). If the VM is shut down, this link does not exist. The DHCP/DNS information is updated/replaced/deleted hourly.
You can set up Google Cloud DNS private zones, create entries for your VPC resources and resolve private IP addresses and hostnames that persist.
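A minimal sketch of that approach, assuming the VMs sit on the default VPC network and using illustrative zone and record names (the IP is the one reserved above):
# Create a private zone visible only to the "default" VPC network:
gcloud dns managed-zones create internal-zone --dns-name="corp.internal." \
    --visibility=private --networks=default --description="Stable VM names"
# Add an A record that persists regardless of the VM's power state:
gcloud dns record-sets create my-vm.corp.internal. --zone=internal-zone \
    --type=A --ttl=300 --rrdatas=10.123.0.123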

What happens if a compute node restarts or stops working?

I wonder what happens when a machine running a compute node with active VMs is shut down due to a hardware malfunction or power outage, and then comes back after some time. Does OpenStack somehow manage to "move" VMs that were scheduled on that node so they run on another node? What happens to networking when VMs on other nodes try to reach the VMs that were running on the shut-down node?
Does OpenStack somehow manage to "move" VMs that were scheduled on that node so they run on another node?
Not automatically.
If your OpenStack infrastructure has been configured with a common storage system for the compute nodes, then an instance that was running on the failed node can be migrated to another node and then booted.
What happens to networking when VMs on other nodes try to reach the VMs that were running on the shut-down node?
Once the instance from the failed node has been restarted on a new node, other VMs will be able to talk to it ... using the instance's IP address.
Of course, network connections won't survive the failure. (If a compute node fails, that brings down all instances that were running on it ...)
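One way to do that restart manually is with nova evacuate, which rebuilds the instance on another host. This is only a sketch with placeholder instance and host names; it assumes shared storage and that the failed host's nova-compute service is already reported as down:
# Check that the failed host's compute service is down before evacuating:
openstack compute service list --service nova-compute
# Rebuild the instance on a surviving host (reuses its disk on shared storage):
nova evacuate my-instance surviving-host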

NLB and Multiple Clusters on 1 Virtual NIC

I can't seem to find information on setting up multiple NLB clusters on a single NIC.
I've already set up my first NLB cluster. This is used to load-balance traffic to a web server running on two hosts. I am now looking to set up a second web server on each of these hosts. The second web server will be given a unique IP address, and I'm hoping to create a second NLB cluster instance to support it.
I have bound a second IP address to the network card on each of my hosts. However, when I launch NLB and choose the option to add a new cluster, there are no interfaces available to create the cluster.
Has anyone else attempted this?
I haven't tried setting something up quite like you describe, but we do have multiple websites running out of our single Windows Server 2008 R2 NLB cluster. The NLB interface lets you add additional IP addresses to the cluster itself, so one cluster managing multiple IP addresses should be able to do what you need. You can then assign the different IP addresses to different web sites.

How does OpenStack assign IPs to virtual machines?

I want to know how OpenStack assigns IPs to virtual machines, and how to find out the ports and IPs used by a VM. Is it possible to find out the IP and ports being used by an application running inside the VM?
To assign an IP to your VM you can use this command:
openstack floating ip create public
To associate your VM and the IP use the command below:
openstack server add floating ip your-vm-name your-ip-number
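To confirm the association took effect (a quick check; the server name is a placeholder):
openstack floating ip list
openstack server show your-vm-name -c addresses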
To list all the ports used by applications, ssh to your instance and run:
sudo lsof -i
Assuming you know the VM name, do the following:
On the controller, run
nova interface-list VM-NAME
It will give you the port ID, IP address, and MAC address of the VM's interface.
You can log in to the VM and run netstat -tlnp to see which IPs and ports are being used by applications running inside the VM.
As to how a VM gets an IP, it depends on your deployment. On a basic OpenStack deployment, when you create a network and a subnet under that network, you will see a DHCP namespace created on the network node (run ip netns on the network node). The namespace name would be qdhcp-<network-id>. The dnsmasq process running inside that DHCP namespace allots IPs to VMs. This is just one of the many ways a VM can get an IP.
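You can see this on the network node; the network ID below is a placeholder:
# List the DHCP namespaces created by Neutron:
ip netns
# Expect something like qdhcp-<network-id>; inspect the dnsmasq listener inside it:
sudo ip netns exec qdhcp-<network-id> netstat -lnp | grep dnsmasq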
This particular End User page of the official documentation could be a good start:
"Each instance can have a private, or fixed, IP address and a public, or floating, one.
Private IP addresses are used for communication between instances, and public ones are used for communication with the outside world.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IPs, configured by the cloud operator, is available in OpenStack Compute.
You can allocate a certain number of these to a project: The maximum number of floating IP addresses per project is defined by the quota.
You can add a floating IP address from this set to an instance of the project. Floating IP addresses can be dynamically disassociated and associated with other instances of the same project at any time.
Before you can assign a floating IP address to an instance, you first must allocate floating IPs to a project. After floating IP addresses have been allocated to the current project, you can assign them to running instances.
You can assign a floating IP address to one instance at a time."
There are, of course, deeper layers to look at in this section of the Admin Guide.
Regarding how to find out about ports and IPs, you have two options: command line interface or API.
For example, if you are using Neutron* and want to find out the IPs or networks in use with the API:
GET v2.0/networks
And using the CLI:
$ neutron net-list
You can use similar commands for ports and subnets (sketched after the note below); however, I haven't personally tested whether you can get information about the application running inside the VM this way.
*Check which OpenStack release you're running. If it's an old one, chances are it's using the Compute service (Nova's nova-network) for networking.
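For completeness, the analogous calls for ports and subnets look like this; the controller endpoint and token are placeholders, and 9696 is the default Neutron API port:
$ neutron port-list
$ neutron subnet-list
$ curl -s -H "X-Auth-Token: $TOKEN" http://controller:9696/v2.0/ports
$ curl -s -H "X-Auth-Token: $TOKEN" http://controller:9696/v2.0/subnets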
