Hyper-V CPU load discrepancy between host and guest

I have a Windows 10 Pro guest running under Hyper-V on a Windows 10 Pro host, and I have observed a discrepancy between what the host and the guest report as the CPU load. The host reports a constant CPU load of at least 5% from vmmem.exe, even when the guest is idle. This figure rises as the guest CPU load increases but never seems to fall below 5%. The VM uses the defaults from the creation wizard, with dynamic memory and a 16 GB maximum. The physical machine has 32 GB of memory and an i7-4870HQ CPU. Both the guest and host OS have the latest patches installed.
An example while connected to the guest via RDP: the guest Task Manager reports 2% CPU load, which agrees with Hyper-V Manager on the host; meanwhile, the host's Task Manager shows vmmem.exe at 7% load.
This seems very wasteful; surely there must be a way to bring this process down to idle when the VM is idle?
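One way to double-check how much CPU the hypervisor is really charging to the VM, independently of Task Manager, is Hyper-V's own resource metering and performance counters. A minimal sketch in PowerShell, run elevated on the host; the VM name "Win10Guest" is a placeholder, not from the setup above:

    # Turn on resource metering for the VM ("Win10Guest" is a placeholder name)
    Enable-VMResourceMetering -VMName "Win10Guest"

    # After letting the guest idle for a while, report the averaged CPU (MHz)
    # and memory usage as accounted by the hypervisor
    Measure-VM -VMName "Win10Guest"

    # Per-virtual-processor counters: "% Guest Run Time" is CPU burned inside
    # the guest; the gap up to "% Total Run Time" is virtualization overhead
    Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'
    Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time'

If these counters agree with the guest's 2% while vmmem.exe still shows 7%, the difference is likely host-side virtualization overhead rather than work done inside the guest.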

Related

How Do I Remote Desktop to a VMWare Windows 10 VM, not the base machine?

I have a PC running Windows 10. On that PC, VMware is hosting a Windows 10 VM. I can run the VM without issue from the local machine. The VM has a typical Windows PC name, different from the base machine's.
When I try to make a Remote Desktop connection from a different PC to the VM using the VM's PC name, it connects to the base machine instead. I can see the VM running on the base machine and control it.
I need to be able to run several VMs on this base machine and then use RDP to open remote desktop sessions on those VMs.
Other configuration info:
The VM network is configured as NAT and I have followed the instructions here: https://kb.vmware.com/s/article/1018809
If I change the network to Bridged, I can ping my other PC from the VM if I set up a fixed IP address; I get nothing if I try DHCP, but that may be due to company network constraints.
In Bridged mode, I can't ping back to the VM from my other PC. (Edit: fixed, this was just Network Discovery and firewall settings.)
I need this system running on Windows 10 because our IT department doesn't want to support my application (even though they agree to it being used), which means I can't go to a Windows Server option. Also, the VMs need to be Windows 10 for application compatibility.
All the equipment under test is in the same LAN subnet and on a single, dumb switch.
Any help would be appreciated.
Launch the menu item VM > Settings and confirm the VM's network adapter is set to NAT.
Inside the virtual machine, open a Command Prompt from the Start menu, run ipconfig, and find the value listed after IPv4 Address. Record this address for later use.
Now, on the host, select the menu item Edit > Virtual Network Editor.
Select the NAT network type and then choose NAT Settings.
In the dialog that opens, click Add to create a new port-forwarding rule.
Enter the following information: Host port: 9997, Type: TCP, Virtual machine IP address: the IP address you recorded earlier.
Note: the virtual machine port is 3389 by default (the standard RDP port). Save and close any open dialogs so the configuration changes take effect.
The final step is to enable Remote Desktop connections from within the guest operating system itself.
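A minimal sketch of that last step plus the actual connection, assuming a Windows 10 guest and the host port 9997 configured above ("HOST-PC" is a placeholder for the host's name or IP):

    # Inside the guest, in an elevated PowerShell prompt: allow incoming RDP
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
        -Name fDenyTSConnections -Value 0
    Enable-NetFirewallRule -DisplayGroup "Remote Desktop"

    # From the other PC: connect to the HOST on the forwarded port; NAT then
    # redirects the session to the VM's port 3389
    mstsc /v:HOST-PC:9997

The key point is that with NAT you always connect to the base machine's address plus the forwarded port, never to the VM's own PC name.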

TCP connection pauses for a long time then restarts

I have an Ubuntu 18.04 VirtualBox VM hosted on a Windows PC and bridged to the gigabit wired connection.
From Chrome running on the Windows PC I invoke a GraphQL API on the Ubuntu VM. The API returns about 3 MB of data and everything works fine.
The Windows PC is in a DMZ and I can invoke the GraphQL API from another PC in another location (remotely). iperf3 tells me that I have about 10-15 Mbit/s of bandwidth when connecting remotely.
However, when I invoke the GraphQL API remotely, the TCP connection (on port 4000) seems to pause for a long time (about 70-80 seconds) after transferring roughly 700 kB, and then it resumes. Sometimes the whole transfer completes, but not every time.
This is a capture that shows the "pause":
I played with the TCP window size (reducing it) to no avail.
I ran several tests with different web servers, and also with an Ubuntu VM under VMware, and finally found the problem: I had to remove the "LiveQoS NDIS 6 Filter Driver" from the Windows Ethernet interface's driver stack.
This "LiveQoS NDIS 6 Filter Driver" causes many problems when using virtual machines.
If you need other info I will be glad to help.
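For anyone who wants to check for, and unbind, the same filter driver without hunting through the adapter's properties dialog, here is a rough PowerShell sketch; it assumes the adapter alias is "Ethernet" and that the component's display name matches exactly what Windows shows:

    # List every protocol/filter bound to the adapter and look for LiveQoS
    Get-NetAdapterBinding -Name "Ethernet"

    # Unbind the filter from this adapter (run as Administrator); uninstalling
    # the software package that installed it removes it from all adapters
    Disable-NetAdapterBinding -Name "Ethernet" -DisplayName "LiveQoS NDIS 6 Filter Driver"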

Cloudstack virtual router not starting

The CloudStack virtual router is always in the stopped state, even though I have changed the CPU speed to 1 GHz and the RAM to 512 MB. Both the secondary storage VM and the console proxy VM are running. Because the virtual router does not start, guest VMs cannot run and go into the error state.

VHOST-USER interface between ovs-dpdk and a VM

I'm trying to understand the packet life-cycle in ovs-dpdk (running on the host) communicating with a VM through a vhost-user interface:
1. The packet is received on a physical port of the device.
2. It is DMA-transferred into mempools on hugepages allocated by ovs-dpdk in user space.
3. ovs-dpdk copies the packet into the shared vring of the associated guest (shared between the ovs-dpdk user-space process and the guest).
4. No more copies are made in the guest, i.e. when any application running in the guest wants to consume the packet, there is zero copy between the shared vring and the guest application.
Is that correct? How is part 4 implemented? This is communication between the guest OS and an application in the guest, so how can it be done with zero copy?
No more copies are made in the guest, i.e. when any application running in the guest wants to consume the packet, there is zero copy between the shared vring and the guest application.
Is that correct?
Not really. It is correct if you run a DPDK application in the guest. But if you run the normal kernel network stack in the guest, there will be another copy between the guest kernel and guest user space.
How is part 4 implemented?
See above. It holds only for DPDK applications running in the guest.

Network adapter not working after a couple of minutes Windows Server 2012

We have been trying to solve a strange problem for the last two days, but after a lot of searching we are stuck at the same point. We previously had Windows Server 2012 and it was working great, no problems, but we decided to upgrade to R2, and that's where all our problems started.
Server:
HP ProLiant ML310e Gen8
2 network cards (Broadcom NetXtreme Gigabit Ethernet)
Windows Server 2012 R2
Clients:
Windows 8.1 Pro
We use one of the network cards for the server and the other for a virtual machine in Hyper-V. When the server was updated, all users, groups, and permissions were created and assigned, so every member of the network could join their computers with their new users and passwords (no problem here). But when clients try to access the network's shared folders they are unable to do so, and they can't ping the server.
The deal is that when the server has just started (or been restarted), every client can see the network directories and can ping the server; everything works fine for 2 or 3 minutes. Then the network falls apart, and there is no way for us to bring it back up other than restarting the server, after which it again works for only about 3 minutes.
If we try to ping the server's IP address, we get a 'General failure' message.
We have tried:
Enabling and disabling network adapters
Changing the order of the network adapters
Not starting Hyper-V at all
Disabling Network Load Balancing (NLB)
Disabling Large Send Offload (LSO), both with netsh and in the card's properties (see the sketch after this list)
Changing the network adapter's static IP
Disabling IPv6
Disabling the 'Allow the computer to turn off this device to save power' option
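For reference, the LSO change from the list above can be scripted instead of clicked through; a rough sketch, assuming the adapter alias is "Ethernet" (run in an elevated PowerShell session on the server):

    # Show the current Large Send Offload settings for the adapter
    Get-NetAdapterLso -Name "Ethernet"

    # Disable LSO for both IPv4 and IPv6 on that adapter
    Disable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6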
We also noticed that the server is getting several IP addresses from DHCP. We have Microsoft Dynamics CRM 2013 and SQL Server 2012 installed.
Can any of you guys please help us with this situation? We'll be very grateful :)
Thanks in advance!
Greetings!
OK, so this was an ol' Windows trick... No matter what configuration we tried, Windows Server kept taking the network down minutes after it started, so we:
Completely uninstalled both network adapters
Restarted the server
Did the standard network adapter configuration (static IP address, subnet mask, gateway, and the virtual switch for Hyper-V)
And everything started working again. So we kept the same configuration as before; Windows just needed to reinstall the network adapters.
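For completeness, the same reconfiguration can be scripted; this is only a sketch, and the interface aliases, addresses, and switch name below are placeholders rather than the actual values we used:

    # Static IP configuration for the server-facing adapter
    New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.10 `
        -PrefixLength 24 -DefaultGateway 192.168.1.1
    Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.1

    # Recreate the external virtual switch on the adapter dedicated to the VM
    New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet 2" -AllowManagementOS $false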
Greetings!
