Debugging poor I/O performance on an OpenStack block device (OpenStack kolla, Queens release)

I have an OpenStack VM that is getting really poor performance on its root disk - less than 50 MB/s for writes. My setup is 10 GbE, OpenStack deployed using kolla (Queens release), with storage on Ceph. I'm trying to follow the path through the infrastructure to identify where the performance bottleneck is, but I'm getting lost along the way:
nova show lets me see which hypervisor (an Ubuntu 16.04 machine) the VM is running on, but once I'm on the hypervisor I don't know what to look at. Where else can I look?
Thank you!

My advice is to check the performance between the host (hypervisor) and Ceph first. If you are able to create a Ceph block device, you can map it with the rbd command, create a filesystem, and mount it - then you can measure the device's I/O performance with the sysstat tools (iostat, sar), iotop, dstat, or vmstat.
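A rough sketch of that check, run on the hypervisor (the pool name "volumes" and the image name are placeholders - a kolla/Ceph deployment may use different pool names, and the host needs a ceph.conf plus a client keyring, which in kolla may live inside the Ceph client containers):

    # create a small throwaway image in the pool the VMs use (pool name is an assumption)
    rbd create volumes/perftest --size 10240
    # map it on the hypervisor, put a filesystem on it and mount it
    rbd map volumes/perftest        # prints the device, e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt
    # generate a direct-I/O write load, and watch the device from a second terminal
    dd if=/dev/zero of=/mnt/testfile bs=1M count=4096 oflag=direct
    iostat -x 1 rbd0
    # clean up
    umount /mnt && rbd unmap /dev/rbd0 && rbd rm volumes/perftest

If the raw RBD device gets close to line-rate throughput here, the bottleneck is more likely in the VM/libvirt layer (cache settings, virtio vs. other disk bus); if it is also slow, look at Ceph and the network.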

Related

Increasing req/sec on Nginx CE

I have a simple location block in my nginx config which echoes back the server's IP. It's deployed on a 2-core, 4 GB RAM EC2 instance, and I am able to get 400 requests per second when load testing it.
I made optimizations like log buffering and opening more file descriptors, and followed the guidelines in http://www.freshblurbs.com/blog/2015/11/28/high-load-nginx-config.html.
The peak CPU load on the node is 4-5%, and the same for memory. I am wondering how I can push it even further. Will using Docker help, or are CPU load and memory irrelevant here because it might be running into network congestion? Will increasing the EC2 node size help?
OS: CentOS. Any help appreciated. Thanks!
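One way to tell whether the limit is the client, the network or nginx itself is to drive the test from a separate machine with more concurrency and watch the NIC while it runs; a sketch (the URL, request count and concurrency are placeholder values):

    # run the load generator on a different host so it doesn't compete with nginx for CPU
    ab -n 50000 -c 500 -k http://<server-ip>/
    # on the nginx node, watch per-interface packet and bandwidth rates during the test
    sar -n DEV 1

If the interface is nowhere near saturation and CPU stays low, the next suspects are the load generator itself and per-connection limits (worker_connections, conntrack, ephemeral ports) rather than instance size.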

How many servers are needed to install an OpenStack or CloudStack cluster?

If I don't use a simulator or devstack but a real production cluster, how many hosts (or nodes) will I need?
CloudStack: 2 (management servers and DBs) + 2 (hypervisors) + 1 storage (if you do not have a storage device, you may need a server for NFS or iSCSI).
Total: 5 servers for a minimal environment with load balancing and HA.
OpenStack: it depends on the components you have chosen. All of the components can be installed on a single server, but you need one more server for load balancing and HA.
Total: 2 servers for a minimal environment with load balancing and HA.
When planning a cloud platform, the total resource = ManagementServer*2 + Hypervisor*N + Storage (server or storage device).
The hypervisor count N is driven by the total CPUs and memory of the VMs you plan to run.
Storage is sized by how much volume capacity you want to allocate for all VMs.
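As a rough worked example (all numbers hypothetical): to run 50 VMs of 2 vCPUs and 4 GB RAM each you need capacity for 100 vCPUs and 200 GB of RAM, so two hypervisor hosts with 32 cores / 128 GB each cover it comfortably once you allow even a modest CPU overcommit; 50 VMs with 40 GB root disks need about 2 TB of usable storage, so the plan becomes 2 management servers + 2 hypervisors + 1 storage server, matching the formula above.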
For CloudStack, unlike OpenStack, you can use just one physical machine for both the management server and the agent (which executes the VMs), and yes, the database and NFS shares can be set up on the same machine too (assuming you need it for testing purposes).
You can follow the CloudStack quick installation guide here: http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.11/qig.html
I have personally installed it using the above documentation and can assure you it works fine with CentOS 7.4 too. For more complex setups and architectures you can find more documentation at http://docs.cloudstack.apache.org. Just be sure to have some free IPs available ;)

What is the difference between cold and hot reboot in openstack

I am new to OpenStack and virtualization.
I can reboot an instance in two ways: a cold (hard) reboot and a hot (soft) reboot.
I can understand the difference on a physical computer, but what is the difference between a cold and a hot reboot of a VM?
Thanks
Apart from the documentation already mentioned in this thread:
http://docs.openstack.org/user-guide/cli-reboot-an-instance.html
A hard reboot also affects the virtual machine at the hypervisor level. Example: if you are using libvirt-based hypervisors (qemu/kvm), the instance control file (the libvirt XML representing the virtual machine) gets reconstructed from scratch.
That's very useful when, for any reason, the instance storage space (/var/lib/nova/instances/INSTANCE_UUID) suffers any kind of problem, or in general whenever you need OpenStack to reconstruct the libvirt definitions!
It affects both the libvirt XML definition normally stored under /etc/libvirt/qemu and the copy under /var/lib/nova/instances/INSTANCE_UUID.
So, in summary: use hard reboot if you need to fully reset/reboot the instance down to the hypervisor level. As you can see, it's more like a "power cycle on steroids".
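A quick way to see the two operations side by side (the server name and the libvirt domain name below are placeholders; with the older nova client the equivalent is nova reboot [--hard]):

    # soft reboot: graceful restart inside the guest
    openstack server reboot myinstance
    # hard reboot: power cycle; nova also rebuilds the libvirt domain definition
    openstack server reboot --hard myinstance
    # on the compute node, inspect the domain XML that nova regenerated
    virsh list --all
    virsh dumpxml instance-0000002a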
Hope this helps !!

devstack multi-node installation

I have 3 nodes which I am using for a multi-node setup. I am thinking of following the structure below:
Controller: keystone, horizon, g-reg, g-api, n-api, n-crt, n-sch, n-cond, n-cauth, n-obj, n-novnc, n-xvnc, c-api, c-sch (this node will have mysql and rabbitmq as well)
Network: q-svc, q-agt, q-dhcp, q-l3, q-meta, quantum
Compute: n-cpu, c-vol
I have a few questions. 1. On the compute node, do I need to keep n-api? Also, what else is needed apart from n-api and c-vol? Is q-agt needed on the compute node? 2. Will I need c-api along with c-vol? Does the compute node need RabbitMQ installed?
Q1)
You generally don't want nova-api on the compute nodes; it's better on the controller.
The Nova API's paste config contains hard-coded system credentials, and you don't want that paste file exposed on any node that a user might compromise via a hypervisor escape.
nova-compute and nova-volume are probably all you need; they communicate with the scheduler over RabbitMQ, so make sure that's working =P
Q2)
You don't NEED Cinder to run an OpenStack cloud, though I see no reason not to include it.
I don't know what impact disabling Cinder has on the devstack stack.sh script; I've never done it.
As for RabbitMQ, see the answer to Q1 above.
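For reference, a compute-node localrc along those lines might look like this (a sketch only: the addresses and passwords are placeholders, service names vary between devstack releases, and q-agt is included because the quantum/neutron agent normally runs on compute nodes too):

    HOST_IP=192.168.1.11            # this compute node
    SERVICE_HOST=192.168.1.10       # the controller
    MYSQL_HOST=$SERVICE_HOST
    RABBIT_HOST=$SERVICE_HOST
    DATABASE_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret
    ADMIN_PASSWORD=secret
    # only the compute-side services; APIs, schedulers, MySQL and RabbitMQ stay on the controller
    ENABLED_SERVICES=n-cpu,c-vol,q-agt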

OpenStack: How to decide hardware capacity?

I've been reading some OpenStack material recently but haven't had a chance to try it yet. I get the sense that OpenStack can manage a large number of virtual machines via an API or a dashboard interface, and that users can easily create/start virtual machines.
This leaves me with a question. Since the underlying hardware varies, some computers may only be able to host one virtual machine while others can host ten. When a user starts a virtual machine, does the user manually designate a host computer, or does OpenStack pick one automatically? In either case, how is a host computer's capacity decided? Does OpenStack provide the functionality to set capacity attributes on a host?
When you run OpenStack, each physical machine (which OpenStack calls compute hosts) will periodically report how many CPUs it has and how much RAM it has, as well as how many CPUs and how much RAM have been allocated to virtual machines that are currently running.
The OpenStack scheduler uses this information to determine which compute host to run a VM on. First, it checks to see if a host has enough CPUs (by applying the CoreFilter) and enough RAM (by applying the RamFilter). Compute hosts that don't have enough CPUs or RAM available won't even be considered.
Once it has a set of candidate hosts with enough CPU and RAM, the scheduler needs to pick one of them. By default, the scheduler uses a "spread-first" strategy, allocating VMs to the machines that have the most CPU/RAM not currently allocated to VMs. It's possible to change this to a "fill-first" behavior, so that the compute host with the least amount of free resources gets allocated first. This is configured by setting the nova.scheduler.least_cost.compute_fill_first_cost_fn parameter.
For more information, see the chapter on scheduling in the OpenStack Compute Admin guide.
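To see the per-host capacity figures the scheduler works from, you can query Nova directly; a sketch (the hostname is a placeholder, and note that in newer releases spread-vs-fill is tuned with weight multipliers such as ram_weight_multiplier rather than the least_cost function named above):

    # reported capacity and current allocations for each compute host
    nova hypervisor-list
    nova hypervisor-show <hostname>
    # aggregate totals across the whole cloud
    nova hypervisor-stats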
