I am wondering what happens when a machine running a compute node with active VMs is shut down due to a hardware malfunction or power outage, and then comes back up after some time. Does OpenStack somehow manage to "move" the VMs that were assigned to that node so they run on another node? And what happens to the networking when VMs on other nodes try to reach VMs that were running on the node that went down?
Does OpenStack somehow manage to "move" the VMs that were assigned to that node so they run on another node?
Not automatically.
If your OpenStack infrastructure has been configured with a common storage system for the compute nodes, then an instance that was running on the failed node can be migrated to another node and then booted.
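With shared storage, this rescheduling can be triggered by an administrator through Nova's evacuate API. Here is a minimal sketch using python-novaclient; the credentials, endpoint, and host names are placeholders, and older compute API microversions may additionally expect an on_shared_storage flag, so treat this as illustrative rather than copy-paste ready:

```python
# Sketch: evacuate instances off a failed compute node (assumes shared
# storage). Credentials, endpoints, and host names are placeholders; older
# compute API microversions may also require on_shared_storage=True.
from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://controller:5000/v3",  # placeholder Keystone endpoint
    username="admin",
    password="secret",                     # placeholder credentials
    project_name="admin",
    user_domain_name="Default",
    project_domain_name="Default",
)
nova = client.Client("2", session=session.Session(auth=auth))

failed_host = "compute-01"  # the node that went down
target_host = "compute-02"  # where the instances should be rebuilt

# Find every instance that was scheduled on the failed node (admin only) ...
servers = nova.servers.list(search_opts={"host": failed_host, "all_tenants": 1})

for server in servers:
    # ... and ask Nova to rebuild it on the target node. With shared storage
    # the disk is reused, so the instance keeps its data and its IP address.
    nova.servers.evacuate(server, host=target_host)
    print(f"evacuating {server.name} ({server.id}) to {target_host}")
```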
What happens to the networking when VMs on other nodes try to reach VMs that were running on the node that went down?
Once the instance from the failed node has been restarted on a new node, other VMs will be able to talk to it ... using the instance's IP address.
Of course, network connections won't survive the failure. (If a compute node fails, that brings down all instances that were running on it ...)
Related
I am trying to install OpenStack on a single-node server.
I need to access instances from the internet.
I am new to OpenStack, so I have spent some time trying to get it working correctly, but without success. I tried DevStack, but it is not persistent after a reboot.
As for MicroStack, it is not configurable.
I need to assign public IPs to instances. I have two physical networks. I tried using an external network, but I could not find an option for how to do that.
Has anyone successfully installed OpenStack on a single machine, and is there a way to expose instances by assigning them public IPs from a pool?
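For context, what I am trying to achieve would look roughly like the following openstacksdk sketch (untested; the cloud name, "physnet2", the instance name, and the address ranges are placeholders for my setup):

```python
# Rough sketch of the goal (untested): a provider network on the second
# physical NIC, a subnet whose allocation pool is the public range, and a
# floating IP attached to an instance. All names/ranges are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # cloud name from clouds.yaml

# Provider network mapped to the second physical NIC, marked external so
# routers and floating IPs can use it.
ext_net = conn.network.create_network(
    name="public",
    provider_network_type="flat",
    provider_physical_network="physnet2",
    is_router_external=True,
)

# Subnet whose allocation pool is the block of public IPs to hand out.
conn.network.create_subnet(
    name="public-subnet",
    network_id=ext_net.id,
    ip_version=4,
    cidr="203.0.113.0/24",
    gateway_ip="203.0.113.1",
    is_dhcp_enabled=False,
    allocation_pools=[{"start": "203.0.113.10", "end": "203.0.113.50"}],
)

# Allocate a floating IP from the pool and attach it to an instance's port.
server = conn.compute.find_server("my-instance")
port = next(conn.network.ports(device_id=server.id))
fip = conn.network.create_ip(floating_network_id=ext_net.id, port_id=port.id)
print("instance reachable at", fip.floating_ip_address)
```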
Thanks in advance.
I have a separate VPC and subnet with EC2 instances running in them. I have installed avahi on one of the instances to discover all the services in that subnet, but avahi is not able to discover the services running on the other instances. I have tried running avahi-browse -art for discovery but get no results. I have checked the connectivity between the instances with ping, and it is fine.
Are there any possible solutions to resolve this?
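To narrow it down, here is a small probe I can use to test whether multicast itself, which mDNS depends on (UDP to group 224.0.0.251), gets between the instances independently of avahi. Standard library only; the port is an arbitrary test port so it won't clash with a running avahi-daemon:

```python
# Probe whether IP multicast is forwarded between instances (mDNS depends on
# this). Run "recv" on one instance and "send" on another; if the receiver
# prints nothing, multicast isn't getting through. Uses the mDNS group but an
# arbitrary port so it won't conflict with a running avahi-daemon.
import socket
import struct
import sys

GROUP, PORT = "224.0.0.251", 15353  # mDNS multicast group, test port

def send():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(b"multicast-probe", (GROUP, PORT))

def recv():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on the default interface.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(1024)
        print("got", data, "from", addr)

send() if sys.argv[1:] == ["send"] else recv()
```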
On GCE, using a Network Load Balancer (NLB), I have the following scenario:
1 VM with internal IP of 10.138.0.62 (no external IP)
1 VM with internal IP of 10.138.0.61 (no external IP)
1 NLB with a target pool (Backend) that contains both of these VMs
1 Health check that monitors a service on these VMs
The issue is that when one of these VMs hits the NLB IP address, the request immediately resolves to the IP of the same instance making the request; it never gets balanced between the two VMs and never makes it to the other VM, even if the VM making the request has failed its health check. For example:
VM on 10.138.0.62 is in target pool of NLB and its service is healthy.
VM on 10.138.0.61 is in target pool of NLB and its service is NOT healthy.
Make a request from the second VM, on 10.138.0.61, to the NLB, and even though this same VM has failed its health check, traffic will still be delivered to itself. It basically ignores the fact that there's an NLB and health checks entirely, and simply says, "If the VM is in the target pool for this NLB and it attempts contact with the IP of the NLB, loop the traffic back to itself".
Note that if I remove the VM on IP 10.138.0.61 from the target pool of the NLB and try the connection again, it immediately goes through to the other VM that's still in the target pool, just as I'd expect. If I put the VM on IP 10.138.0.61 back in the target pool and attempt to hit the NLB, it again only loops back to the calling machine on 10.138.0.61.
Googling around a bit, I saw that this behavior happens on some versions of Windows Server and its NLB, but I didn't expect this on GCE. Have others seen the same behavior? Is this just a known behavior that I should expect? If so, any workarounds?
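For completeness, this is the kind of check I am running from 10.138.0.61. It assumes each backend serves an endpoint that returns its own hostname, which is specific to my test service, and the NLB address is a placeholder here:

```python
# Hit the NLB address repeatedly and record which backend answered. Assumes
# each backend exposes a /hostname endpoint returning its own name (that part
# is specific to my test service); the NLB IP below is a placeholder.
from collections import Counter
from urllib.request import urlopen

NLB_IP = "203.0.113.25"  # placeholder for the NLB's address

hits = Counter()
for _ in range(20):
    with urlopen(f"http://{NLB_IP}/hostname", timeout=5) as resp:
        hits[resp.read().decode().strip()] += 1

# With the hairpin behavior described above, every response comes from the
# calling VM itself, never from the other backend.
print(hits)
```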
This is working as intended. Due to how networks are configured in a virtual environment, a load-balanced VM that sends a request to the load balancer's IP will always have that request returned to itself, ignoring health check status. Please check the link provided for more information.
In Ceilometer, when pollsters collect meters from VMs, they use the hypervisor on the compute node. Now I want to write a new plugin for Ceilometer that does not use the hypervisor to collect meters; instead, I want to collect meters via a service installed on the VMs (meaning Ceilometer gets its data from that service), so the compute node must be able to communicate with the VMs by their (private) IP addresses. Is there any way to do this?
Thanks all.
In general, the internal network used by your Nova instances is kept intentionally separate from the compute hosts themselves as a security precaution (to ensure that someone logged into a Nova server isn't able to compromise your host).
For what you are proposing, it would be better to adopt a push model rather than a pull model: have a service running inside your instances that publishes data to some service accessible at a routable IP address.
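A minimal sketch of such a push agent, assuming you stand up some collector at a routable address; the collector URL, the payload shape, and the meter being read are all placeholders for whatever you actually build:

```python
# Sketch of the push model: a small agent inside the instance that publishes
# its own measurements to a collector reachable at a routable address.
# The collector URL and the payload format are placeholders; point this at
# whatever service you stand up to receive the samples.
import json
import socket
import time
import urllib.request

COLLECTOR_URL = "http://collector.example.com:8080/samples"  # placeholder

def read_meter():
    # Placeholder measurement; replace with whatever the in-VM service reports.
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

while True:
    sample = {
        "resource": socket.gethostname(),
        "meter": "load.1min",
        "value": read_meter(),
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)
    time.sleep(60)  # publish once a minute
```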
I have a couple of virtual machines in one Cloud Service. They are assigned to the same VNET and have received private IP addresses in the same subnet.
I noticed that I was unable to ping from one server to another, and when I started to look into it, I found that there is no connectivity whatsoever between the servers. I have disabled Windows Firewall on both servers, but that didn't do the trick.
Just now, on one of the VMs, I tried to ping the internal IP address assigned to that VM itself, but even that fails.
Can anyone shed some light into this? Is this expected behavior?
The reason I am looking into this right now is that we are adding a third VM to do some performance monitoring. Since the other two VMs are part of the same Cloud Service, we cannot open endpoints to both of them using the same port, so we need to go directly to the internal IPs.
Thanks in advance
I had a similar issue not too long ago: three servers in the same VNET that could communicate with my HQ via site-to-site VPN but could not communicate with one another. After several hours of banging my head against the desk, I ended up just rebuilding the VNET, and connectivity between the servers was restored. The VNET's router feature had become corrupted and could no longer send traffic internally.
To rebuild the VNET, you'll need to delete the VMs. Keep the disks, though, and you can rebuild the VMs quickly after the new VNET is back online.
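Also, while you are diagnosing this (and after the rebuild), a plain TCP probe is a more reliable connectivity check than ping, since ICMP can be filtered in misleading ways. A small standard-library sketch; the internal IPs and the port are placeholders for your own VMs and whatever service they run:

```python
# Quick TCP reachability check between VMs, independent of ICMP/ping.
# Target addresses and port are placeholders for your own VMs and service.
import socket

def tcp_check(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ("10.0.0.4", "10.0.0.5"):  # placeholder internal IPs
    ok = tcp_check(host, 3389)         # e.g. RDP on Windows VMs
    print(f"{host}: {'reachable' if ok else 'NOT reachable'}")
```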