CloudStack virtual router not starting - apache-cloudstack

The CloudStack virtual router is always in the Stopped state, even though I have changed the CPU speed to 1 GHz and the RAM to 512 MB. Both the secondary storage VM and the console proxy VM are running. Because the virtual router does not start, VMs cannot run and go into the Error state.
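A hedged first step (not an answer from this page, just a diagnostic sketch that assumes CloudMonkey, cmk, is configured against the management server; the network UUID is a placeholder):

    # Check the router state and the system offering it uses
    cmk list routers listall=true
    cmk list serviceofferings issystem=true systemvmtype=domainrouter

    # Recreate the router for the affected network
    cmk restart network id=<network-uuid> cleanup=true

    # Watch the management-server log for the reason the router fails to start
    tail -f /var/log/cloudstack/management/management-server.log

The management-server log usually states the actual failure reason, for example insufficient host capacity or a missing or not-yet-ready system VM template.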

Related

VM can't ping host, and host can't ping VM or see web server

I have a Debian VM on my Windows 10 machine, in VMware Player 16. The VM is running a web server. Both are using my router for DHCP, and the router lists the VM as a DHCP client.
The PC is on 192.168.0.20. The VM is 192.168.0.50.
The VM's networking is set to 'bridged'. It can access the internet, and it can ping various devices on my network. But it cannot ping the PC that is hosting the VM - it shows Destination Host Unreachable.
The PC cannot ping the VM (also Destination Host Unreachable) and cannot see the VM's web server.
The router can ping the VM, and the PC.
My phone, on the same network via WiFi, can ping the PC and also the VM.
My phone can also access the web server of the VM.
The firewall is off in both the VM and the PC.
It seems like the host PC is not allowing traffic that it sent (even though it is sending it by proxy for the VM) to reach the VM. Traffic from other sources on the network, i.e. not the PC or the VM, can get to the VM without any problem. It is as if the PC sees data coming in and says "That traffic came from me, but I have nothing listening for it so I will just ignore it."
Any ideas on how I can fix this?
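One way to narrow this down (a hedged sketch, not from this thread; the IPs are the ones from the question and the adapter name will vary) is to check whether ARP resolves at all between host and guest, and which adapters VMware's bridge protocol is bound to:

    # On the Debian guest: does the host answer, and does its MAC resolve?
    ping -c 3 192.168.0.20
    ip neigh show 192.168.0.20

    # On the Windows host (PowerShell): does the guest's MAC resolve?
    ping 192.168.0.50
    arp -a | findstr 192.168.0.50

    # The VMware Bridge Protocol should normally be bound only to the physical NIC
    Get-NetAdapterBinding -DisplayName "VMware Bridge Protocol"

If ARP never resolves, the bridge itself is dropping the frames; if ARP resolves but pings still fail, something above layer 2 (a firewall profile or a filter driver on the bridged NIC) is discarding the traffic.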

Hyper-V CPU load discrepancy between host and guest

I have a setup of a Windows 10 Pro guest running under Hyper-V on a Windows 10 Pro host machine and have observed a discrepancy between what the host and guest report as the CPU load. There seems to be a constant 5% CPU load at a minimum from vmmem.exe as reported on the host even when the guest is idle. This goes up as the guest CPU load increases but doesn't seem to ever fall below 5%. The VM setup is just the defaults when creating a VM with dynamic memory and a max 16GB memory limit. The physical machine has 32GB memory and an i7-4870HQ CPU. Both the guest and host OS have latest patches installed.
An example while connected to the guest via RDP -- the guest Task Manager reports 2% CPU load, which agrees with the Hyper-V Manager on the host; meanwhile, the host's Task Manager shows vmmem.exe with a 7% load:
This seems very wasteful; surely there must be a way to bring this process down to idle when the VM is idle?
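One way to see where that extra load goes (a hedged sketch, not from this thread) is to compare Task Manager's vmmem figure with the hypervisor's own performance counters on the host, which separate time spent running the guest from hypervisor overhead:

    # On the host, in an elevated PowerShell session:
    # CPU actually consumed by the guest's virtual processors
    Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'

    # CPU spent by the hypervisor on behalf of those virtual processors
    Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Hypervisor Run Time'

    # Overall load on the physical logical processors as the hypervisor sees it
    Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'

If % Guest Run Time is near zero while vmmem still shows a few percent, that residue is host-side virtualization overhead rather than work done inside the guest.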

GCE Network Load Balancer loops traffic back to VM

On GCE, using a Network Load Balancer (NLB), I have the following scenario:
1 VM with internal IP of 10.138.0.62 (no external IP)
1 VM with internal IP of 10.138.0.61 (no external IP)
1 NLB with a target pool (Backend) that contains both of these VMs
1 Health check that monitors a service on these VMs
The issue is that when one of these VMs makes a request to the NLB IP address, the request is immediately resolved back to the IP of the same instance making it; it never gets balanced between the two VMs and never reaches the other VM, even if the VM making the request has failed its health check. For example:
VM on 10.138.0.62 is in target pool of NLB and its service is healthy.
VM on 10.138.0.61 is in target pool of NLB and its service is NOT healthy.
Make a request from the second VM, on 10.138.0.61, to the NLB: even though this VM has failed its health check, traffic is still delivered back to itself. It basically ignores the NLB and the health checks entirely, as if the rule were, "If the VM is in the target pool for this NLB and it contacts the NLB's IP, loop the traffic back to itself."
Note that if I remove the VM on 10.138.0.61 from the NLB's target pool and try the connection again, it immediately goes through to the other VM that is still in the target pool, just as I'd expect. If I put the VM on 10.138.0.61 back in the target pool and hit the NLB again, it once more only loops back to the calling machine on 10.138.0.61.
Googling around a bit, I saw that this behavior happens on some versions of Windows Server and its NLB, but I didn't expect this on GCE. Have others seen the same behavior? Is this just a known behavior that I should expect? If so, any workarounds?
This is working as intended. Due to how networks are configured in a virtual environment, this will always result in the load-balanced VM returning the request to itself, ignoring health check status. Please check the link provided for more information.
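For background (a hedged sketch, not part of the original answer): on GCE backends the guest agent typically installs the forwarding-rule IP as a local route, which is why a backend's own connection to the NLB IP is answered locally and never reaches the load balancer. You can see this on the VM; 203.0.113.10 below is a placeholder for the NLB IP:

    # Routes installed by the GCE guest agent usually carry "proto 66"
    ip route show table local | grep 203.0.113.10

    # Shows that traffic to the NLB IP is delivered locally on this VM
    ip route get 203.0.113.10

Deleting that local route would push the traffic out to the load balancer, but the guest agent tends to re-add it and the VM then stops accepting inbound NLB traffic for that IP, so treat it as an experiment rather than a fix.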

TCP connection pauses for a long time then restarts

I have an Ubuntu 18.04 VirtualBox VM hosted on a Windows PC and bridged to the gigabit wired connection.
From Chrome running on the Windows PC I invoke a GraphQL API on the Ubuntu VM. The API returns about 3 MB of data and everything works fine.
The Windows PC is in a DMZ and I can invoke the GraphQL API from another PC in another location (remotely). iperf3 tells me that I have about 10-15 Mbit/s of bandwidth when connecting remotely.
However, when I invoke the GraphQL API remotely, the TCP connection (on port 4000) pauses for a long time (about 70-80 seconds) after transferring roughly 700 kB and then resumes. Sometimes the whole transfer completes, but not always.
This is a capture that shows the "pause":
I played with the TCP window size (reducing it) to no avail.
I did several tests with different web servers, and also with an Ubuntu VM under VMware, and finally found the problem: I had to remove the "LiveQoS NDIS 6 Filter Driver" from the Windows Ethernet interface's driver stack.
This "LiveQoS NDIS 6 Filter Driver" causes many problems when using virtual machines.
If you need any other info, I will be glad to provide it.
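If you prefer to unbind the filter rather than uninstall it (a hedged sketch; the adapter name "Ethernet" is an assumption, use whatever Get-NetAdapter reports), this can be done from an elevated PowerShell prompt:

    # List the protocols and filter drivers bound to the bridged adapter
    Get-NetAdapterBinding -Name "Ethernet"

    # Unbind the offending filter (reversible with Enable-NetAdapterBinding)
    Disable-NetAdapterBinding -Name "Ethernet" -DisplayName "LiveQoS NDIS 6 Filter Driver"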

VHOST-USER interface between ovs-dpdk and a VM

I'm trying to understand the packet life-cycle in ovs-dpdk (running on the host) communicating with a VM through a vhost-user interface:
1. The packet is received on a physical port of the device.
2. It is DMA-transferred to mempools on huge pages allocated by ovs-dpdk in user space.
3. ovs-dpdk copies the packet into the shared vring of the associated guest (shared between the ovs-dpdk userspace process and the guest).
4. No more copies in the guest - i.e. when any application running on the guest wants to consume the packet, there is zero copy between the shared vring and the guest application.
Is that correct? How is part 4 implemented? This is communication between the OS in the guest and an application in the guest, so how is this implemented with zero copy?
No more copies in the guest - i.e. when any application running on the guest wants to consume the packet, there is zero copy between the shared vring and the guest application.
Is that correct?
Not really. It is correct if you run a DPDK application in the guest. But if you run a normal kernel in the guest, there will be another copy between guest kernel and guest user space.
How is part 4 implemented?
See above. It is true only for DPDK applications.
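For reference (a hedged sketch following the standard OVS-DPDK documentation rather than this thread; bridge, port, socket path and sizes are assumptions), the setup that makes the shared vring possible backs the guest's memory with hugepages mapped share=on, so the ovs-dpdk process can access the guest buffers directly:

    # Host: add a vhost-user port to an existing DPDK-enabled bridge
    ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser

    # Guest: hugepage-backed, shareable memory plus a virtio-net device attached
    # to the vhost-user socket created by OVS (paths and sizes are placeholders)
    qemu-system-x86_64 -m 4096 -smp 2 -enable-kvm \
      -object memory-backend-file,id=mem0,size=4096M,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 -mem-prealloc \
      -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhost-user0 \
      -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
      -device virtio-net-pci,netdev=net0 \
      guest-disk.img

Whether a further copy happens inside the guest then depends on the consumer: a DPDK application polling the virtio rings reads those shared buffers directly, while the regular virtio-net kernel driver copies the data into kernel buffers and again into user space.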
