How many servers are needed to install an OpenStack or CloudStack cluster?

If I don't use a simulator or DevStack but build a real production cluster, how many hosts (or nodes) will I need?

CloudStack: 2 (management servers and databases) + 2 (hypervisors) + 1 storage server (if you do not have a dedicated storage device, you may need a server for NFS or iSCSI).
Total: 5 servers for a minimal environment with load balancing and HA.
OpenStack: It depends on the components you choose. All components can be installed on a single server, but you need one more server for load balancing and HA.
Total: 2 servers for a minimal environment with load balancing and HA.
When planning a cloud platform, the total resource requirement = ManagementServer*2 + Hypervisor*N + Storage (a server or a storage device).
The number of hypervisors depends on the total CPUs and memory of the VMs you plan to run.
The storage capacity is the total size of the volumes you want to allocate for all VMs.
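To make that back-of-the-envelope formula concrete, here is a rough sizing sketch in Python. It is not part of either platform; the VM flavor, per-host capacity, and overcommit ratio are made-up numbers that you should replace with your own.

import math

# Assumed workload and hardware -- replace with your own figures.
vm_count = 100
vcpus_per_vm, ram_gb_per_vm, disk_gb_per_vm = 2, 4, 40
host_vcpus, host_ram_gb = 32, 128   # one hypervisor node
cpu_overcommit = 4.0                # typical CPU overcommit ratio; RAM is not overcommitted here

# Hypervisor count is driven by whichever resource runs out first.
hypervisors_by_cpu = math.ceil(vm_count * vcpus_per_vm / (host_vcpus * cpu_overcommit))
hypervisors_by_ram = math.ceil(vm_count * ram_gb_per_vm / host_ram_gb)
hypervisors = max(hypervisors_by_cpu, hypervisors_by_ram)

management = 2                      # management/controller servers in an HA pair
storage_gb = vm_count * disk_gb_per_vm

print(f"Hypervisors needed: {hypervisors}")
print(f"Servers (management + hypervisors): {management + hypervisors}")
print(f"Storage to provision for volumes: {storage_gb} GB")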

For CloudStack, unlike OpenStack, you can use just one physical server for both the management server and the agent (which runs the VMs), and yes, the database and NFS shares can be set up on the same machine too (assuming you need it for testing purposes).
You can follow the quick installation guide of Cloudstack here: http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.11/qig.html
I have personally installed CloudStack using the above documentation and can assure you it works fine with CentOS 7.4 too. For more complex setups and architectures you can find more documentation at http://docs.cloudstack.apache.org. Just be sure to have some free IPs available ;)

Do all servers have one base OS, like in the Red Hat OpenStack architecture?

I'm a noob learning OpenStack, and the resources are all over the place, to be honest. I came across this image and would like to know one thing.
Suppose I have 100 TB of storage, 10 server-grade processors, and 1 TB of RAM. Do all these resources come under only one base OS, Red Hat Enterprise Linux? In other words, do they connect all the equipment and install one single OS that can comprehend it all?
And on top of this, we deploy an OpenStack architecture so clients can use the resources as needed? Do we need as many physical NICs, or are the NICs virtual?
How to scale?
As you say, you just add a server. Install RHEL or another supported Linux distro (it's best to install the same distro and version on all servers), then OpenStack and configure it. The new server will register with the OpenStack controllers and can be used for launching virtual machines immediately.
The process is a bit more involved when you run a cloud with baremetal instances (i.e. you don't launch VMs but provision physical systems), but in principle it's the same.
By definition (at consumer scale, like one laptop) we need a network interface card for one IP.
This is incorrect. You can configure multiple IP addresses on a single interface, even on your PC at home, even if that PC runs Windows.
An enterprise cloud requires connecting nodes to several networks. Usually, servers have several physical NICs, bond them together, and use VLANs or other multiplexing technologies to implement the networks. See this blog (five years old, but the principles still apply today, and it's well-written) for a good example of a real-world OpenStack network architecture.
OpenStack uses one big special NIC
OpenStack can be deployed in many ways. It is not a shrink-wrapped solution. It can be used on servers with single NICs, bonded NICs, VLANs, normal networks, etc. Your statement is almost correct if you think of a typical deployment and a bond interface as a "big special NIC".
If you are interested to try this out at home, see the OpenStack installation tutorial. You will learn a lot.

Debugging poor I/O performance on OpenStack block device (OpenStack kolla:queen)

I have an OpenStack VM that is getting really poor performance on its root disk: less than 50 MB/s writes. My setup is 10 GbE networking, OpenStack deployed using kolla (the Queens release), with storage on Ceph. I'm trying to follow the path through the infrastructure to identify where the performance bottleneck is, but I'm getting lost along the way:
nova show lets me see which hypervisor (an Ubuntu 16.04 machine) the VM is running on, but once I'm on the hypervisor I don't know what to look at. Where else can I look?
Thank you!
My advice is to check the performance between the host (hypervisor) and Ceph first. If you are able to create a Ceph block device, you can map it with the rbd command, create a filesystem, and mount it; then you can measure the device I/O performance with iostat (from sysstat), iotop, dstat, vmstat, or even sar.
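For example, a rough Python wrapper around those CLI steps might look like the sketch below. The pool name, image name, and mount point are placeholders; run it as root on the hypervisor against your own Ceph pool, and prefer fio over dd if you want a serious benchmark.

import subprocess

POOL, IMAGE, MNT = "volumes", "perftest", "/mnt/rbdtest"   # hypothetical names

def run(cmd):
    print("+", " ".join(cmd))
    res = subprocess.run(cmd, check=True, capture_output=True, text=True)
    if res.stderr:
        print(res.stderr.strip())   # dd reports its throughput on stderr
    return res.stdout.strip()

# 1. Create and map a 10 GiB test image directly on the hypervisor.
run(["rbd", "create", f"{POOL}/{IMAGE}", "--size", "10240"])
dev = run(["rbd", "map", f"{POOL}/{IMAGE}"])               # e.g. /dev/rbd0

# 2. Put a filesystem on it and mount it.
run(["mkfs.ext4", dev])
run(["mkdir", "-p", MNT])
run(["mount", dev, MNT])

# 3. Crude sequential write test; watch iostat/dstat in another terminal.
run(["dd", "if=/dev/zero", f"of={MNT}/testfile", "bs=1M", "count=1024", "oflag=direct"])

# 4. Clean up.
run(["umount", MNT])
run(["rbd", "unmap", dev])
run(["rbd", "rm", f"{POOL}/{IMAGE}"])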

Migrate from legacy network in GCE

Long story short - I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).
It looks like VPC network peering is a perfect solution to this. Unfortunately, one of the existing networks is "legacy". Here's what the Google docs state about legacy networks:
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of legacy network? Documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when the VM is down?
EDIT:
It has been suggested to recreate the VMs while keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network). I wouldn't be surprised if someone already did that.
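A minimal sketch of those three steps with the Google API Python client could look like this. The project, zone, instance name, and target subnetwork are placeholders, and a real migration would also need to preserve metadata, tags, service accounts, and secondary disks, and wait for each operation to finish.

from googleapiclient import discovery

PROJECT, ZONE, INSTANCE = "my-project", "us-central1-a", "legacy-vm-1"   # placeholders
NEW_SUBNET = "projects/my-project/regions/us-central1/subnetworks/new-vpc-subnet"

compute = discovery.build("compute", "v1")

# 1. Get the VM's current configuration.
inst = compute.instances().get(project=PROJECT, zone=ZONE, instance=INSTANCE).execute()

# Make sure the boot disk survives the delete.
compute.instances().setDiskAutoDelete(
    project=PROJECT, zone=ZONE, instance=INSTANCE,
    deviceName=inst["disks"][0]["deviceName"], autoDelete=False).execute()

# 2. Delete the VM; the persistent disk is kept.
compute.instances().delete(project=PROJECT, zone=ZONE, instance=INSTANCE).execute()
# ... wait for the delete operation to complete before recreating ...

# 3. Recreate the VM in the new VPC subnetwork, attaching the existing disk.
body = {
    "name": INSTANCE,
    "machineType": inst["machineType"],
    "disks": [{"boot": True, "source": inst["disks"][0]["source"]}],
    "networkInterfaces": [{"subnetwork": NEW_SUBNET}],
}
compute.instances().insert(project=PROJECT, zone=ZONE, body=body).execute()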
UPDATE:
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migration of a whole instance group, load balancer, etc. Check it out.

BYON with physical machines and a global SLA: how to ensure that applications are not all installed on the same machine

I have a scenario like this:
I have applications A, B, C, D, ... and physical machines M, N, O, P, Q, ...
I use BYON to manage the physical machines. Because the physical machines are "strong", I want to deploy several applications on each of them, so I set the SLA to global. This raises a question: when application A is deployed on machine M and I then deploy applications B, C, D, ..., will applications A, B, C, D, ... be installed on machine M only, rather than on machines N, O, P, Q, ...? (In that case, machine M's load would be very high.)
Does this problem exist, and if so, how can it be resolved? Thank you very much!
It's possible to limit the number of services on a specific machine by specifying the memory required for each service. As part of the global isolation SLA you can set the amount of memory required by each service, so when there isn't enough memory left on the machine, the next one will be used.
The syntax is:
isolationSLA {
    global {
        instanceCpuCores 0
        instanceMemoryMB 128  // each instance needs 128MB allocated for it on the machine
        useManagement true    // enables installing services on the management server; defaults to false
    }
}
Please note that the above configuration also allows services to be installed on the management machine itself (useManagement true); set it to false if you don't want that.
A more detailed explanation is available here, under "Isolation SLA".

OpenStack: How to decide hardware capacity?

I've been reading some OpenStack material recently, but haven't had a chance to try it yet. I got the sense that OpenStack can manage a large number of virtual machines via an API or a dashboard interface, and that users can easily create and start virtual machines.
This leaves me with a question. Since the underlying hardware varies, some machines may only be able to host one virtual machine while others can host ten. When a user starts a virtual machine, does the user manually designate a physical machine to host it, or does OpenStack pick one automatically? In either case, how is a physical machine's capacity decided? Does OpenStack provide the functionality to set a capacity attribute on a physical machine?
When you run OpenStack, each physical machine (which OpenStack calls compute hosts) will periodically report how many CPUs it has and how much RAM it has, as well as how many CPUs and how much RAM have been allocated to virtual machines that are currently running.
The OpenStack scheduler uses this information to determine which compute host to run a VM on. First, it checks to see if a host has enough CPUs (by applying the CoreFilter) and enough RAM (by applying the RamFilter). Compute hosts that don't have enough CPUs or RAM available won't even be considered.
Once it has a set of candidate hosts with enough CPU and RAM, the scheduler needs to pick one of them. By default, the scheduler uses a "spread-first" strategy, allocating VMs to the machines that have the most CPU/RAM not currently allocated to VMs. It's possible to change this strategy to a "fill-first" behavior, so that the compute host with the least amount of free resources gets allocated first; this is configured by setting the nova.scheduler.least_cost.compute_fill_first_cost_fn cost function.
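To illustrate the filter-then-weigh idea, here is a toy Python sketch (not Nova's actual code): hosts that fail the CPU or RAM check are dropped, and the remaining candidates are ranked either spread-first or fill-first.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_vcpus: int
    free_ram_mb: int

def schedule(hosts, req_vcpus, req_ram_mb, fill_first=False):
    # Filtering step, analogous to CoreFilter and RamFilter.
    candidates = [h for h in hosts
                  if h.free_vcpus >= req_vcpus and h.free_ram_mb >= req_ram_mb]
    if not candidates:
        raise RuntimeError("No valid host found")
    # Weighing step: spread-first picks the emptiest host, fill-first the fullest.
    return (min if fill_first else max)(candidates, key=lambda h: (h.free_ram_mb, h.free_vcpus))

hosts = [Host("node1", 8, 16384), Host("node2", 2, 4096), Host("node3", 16, 65536)]
print(schedule(hosts, req_vcpus=4, req_ram_mb=8192).name)                   # spread-first -> node3
print(schedule(hosts, req_vcpus=4, req_ram_mb=8192, fill_first=True).name)  # fill-first -> node1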
For more information, see the chapter on scheduling in the OpenStack Compute Admin guide.
