OpenStack instances do not release CPU and memory resources once powered off - openstack

I am new to OpenStack. I noticed that when an instance in OpenStack is powered off, it does not return its CPU and memory resources to the pool. Is this normal behaviour, or am I missing something?
I am using OpenStack version 5.7.
For example: in project ABC, 10 CPUs are allocated. If I create one instance that uses all 10 CPUs and then power the instance off, all CPUs are still shown as utilised and I am unable to set up a new instance.

It is normal behavior. If the CPU or memory resources were released, there would be no guarantee that you could power the instance on again. (What if the resources had been allocated to other instances, and you couldn't shut them down?)
If you want to release an instance's resources (apart from its IP addresses) without totally destroying it, consider shelving it.
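For instance, shelving and unshelving can be done with the standard 'openstack' command-line client ('openstack server shelve' / 'openstack server unshelve'). The snippet below is only a rough sketch that wraps those CLI calls from Java; the server name "demo" is a placeholder.

// Rough sketch: drives the standard 'openstack' CLI via ProcessBuilder.
// The server name "demo" is a placeholder; in practice you would usually
// just run these commands directly in a shell.
public class ShelveDemo {
    // Runs a command, inheriting stdout/stderr, and waits for it to finish.
    static int run(String... cmd) throws Exception {
        return new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Shelving snapshots the instance and releases its CPU/RAM allocation
        // (as described above, the IP addresses are kept).
        run("openstack", "server", "shelve", "demo");

        // Later, unshelving brings the instance back on a host with free capacity.
        run("openstack", "server", "unshelve", "demo");
    }
}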

Related

Machine <IP_address> has been started with not enough memory

I am using Cloudify 2.7 with OpenStack Icehouse.
I developed a Tomcat recipe and deployed it. In the orchestrator log of the Cloudify console, I see the following WARNING:
2015-06-04 11:05:01,706 ESM INFO [org.openspaces.grid.gsm.strategy.ScaleStrategyProgressEventState] - [tommy.tomcat] machines SLA enforcement is in progress.; Caused by: org.openspaces.grid.gsm.machines.exceptions.ExpectedMachineWithMoreMemoryException: Machines SLA Enforcement is in progress: Expected machine with more memory. Machine <Public_IP>/<Public_IP> has been started with not enough memory. Actual total memory is 995MB. Which is less than (reserved + container) = (0MB+3800MB) = 3800MB
The Flavor of the VM is: 4GB RAM, 2vCPU, 20GB Disk
In the cloud driver I commented out the following line:
//reservedMemoryCapacityPerMachineInMB 1024
and configured the compute section related to the flavor as follows:
computeTemplate
{
    imageId <imageID>
    machineMemoryMB 3900
    hardwareId <hardwareId>
    ...
}
Can someone help me pinpoint the error?
Thanks.
The error message states that the actual available memory is only 995MB, which is considerably less than the expected 4GB. To clarify that:
Do you run multiple services on the same machine?
Maybe the VM really has less memory than expected; please run 'cat /proc/meminfo' on the started VM to verify exactly how much memory it has.
In principle, you should not comment out the reserved-memory setting, because Cloudify must take it into account: it is supposed to represent the memory used by the OS and other processes. Additionally, the orchestrator (ESM) reserves roughly 100 MB for Cloudify itself.
So, please update machineMemoryMB to the value calculated this way:
(the MemTotal value reported by 'cat /proc/meminfo', converted to MB) - 1024 - 100
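For example (a sketch only, not part of Cloudify): /proc/meminfo reports MemTotal in kB, so a small standalone Java helper run on the VM could apply the formula like this. If MemTotal were, say, 4046856 kB (about 3951 MB), it would print machineMemoryMB 2827.

import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch: derives a machineMemoryMB value from the VM's real memory using the
// formula above (1024 MB reserved for the OS + ~100 MB for the orchestrator).
public class MachineMemory {
    public static void main(String[] args) throws Exception {
        long memTotalKb = Files.readAllLines(Paths.get("/proc/meminfo")).stream()
                .filter(line -> line.startsWith("MemTotal:"))
                .map(line -> line.replaceAll("\\D+", "")) // keep digits only
                .mapToLong(Long::parseLong)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("MemTotal not found"));

        long machineMemoryMB = memTotalKb / 1024 - 1024 - 100;
        System.out.println("machineMemoryMB " + machineMemoryMB);
    }
}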

BYON with physical machines, SLA is global: how to ensure that the applications are not all installed on the same machine

I have a scenario like this:
I have applications A, B, C, D, ... and physical machines M, N, O, P, Q, ...
I use BYON to manage the physical machines. Because the physical machines are "strong", I want to deploy several applications on each one, so I set the SLA to global. This raises a question: when application A is deployed on machine M and I then deploy the other applications B, C, D, ..., will applications A, B, C, D, ... all be installed on machine M only, rather than on machines N, O, P, Q, ...? (In that case, the load on machine M would become very high.)
Does this problem exist, and if so, how can it be resolved? Thank you very much!
It's possible to limit the number of services on a specific machine by specifying the memory required by each service. As part of the global isolation SLA, you can set the amount of memory each service instance needs, so when there isn't enough memory left on the machine, the next one will be used.
The syntax is:
isolationSLA {
    global {
        instanceCpuCores 0
        instanceMemoryMB 128 // each instance needs 128MB allocated for it on the VM
        useManagement true   // enables installing services on the management server; defaults to false
    }
}
Please note that the above configuration also allows services to be installed on the management machine itself ('useManagement true'); set it to false if you don't want that.
A more detailed explanation is available here, under "Isolation SLA".
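As a rough worked example (all numbers below are hypothetical): if Cloudify can use about 4096 MB on a machine and each instance reserves 128 MB, you can estimate when the next machine will start being used.

// Sketch with made-up numbers: estimates how many service instances fit on one
// machine before the global isolation SLA has to move on to the next machine.
public class GlobalSlaCapacity {
    public static void main(String[] args) {
        int usableMemoryMB   = 4096; // memory available to Cloudify on the machine (hypothetical)
        int instanceMemoryMB = 128;  // from the isolationSLA block above

        int maxInstances = usableMemoryMB / instanceMemoryMB;
        System.out.println(maxInstances + " instances fit before the next machine is used."); // 32
    }
}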

JVM Monitoring jar execution

I want to check the memory usage of a JAR that does some calculations. For this I want to use JVM Monitor. When starting JVM Monitor, I need to pick the JVM that is running my jar. The problem is that my JAR executes so fast (<1 sec) that it never shows up in the list.
Is there any way I can start the JVM without executing the JAR immediately?
JConsole only finds applications that are running at the moment JConsole itself starts; only those applications' host and port will be displayed in the list. For such very short-running applications to be shown in the list, adding a wait at the end of program execution is the only real option.
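For example, a minimal sketch (runCalculations() is just a placeholder for whatever your jar actually does):

// Sketch: keep the JVM alive after the work finishes so a monitoring tool
// (JVM Monitor, JConsole, jvisualvm, ...) has time to attach.
public class Main {
    public static void main(String[] args) throws Exception {
        runCalculations(); // the jar's real work

        System.out.println("Done. Press Enter to exit...");
        System.in.read(); // block until the user presses Enter
    }

    private static void runCalculations() {
        // ... your actual calculation code ...
    }
}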
Also, whatever memory stats JConsole displays will include JConsole's own memory footprint as well. A better choice for monitoring is jvisualvm, which can show memory, thread and GC statistics too.
Alternatively, if you want to check the code cache or compilation statistics, you can use -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation -XX:+PrintCodeCache

Openstack: How to decide hardware capacity?

I've been reading some OpenStack material recently but haven't had a chance to try it yet. I get the sense that OpenStack can manage a large number of virtual machines via an API or the dashboard interface, and that users can easily create and start virtual machines.
This leaves me with a question. Since the underlying hardware varies, one computer may only be able to host a single virtual machine while another can host ten. When a user starts a virtual machine, does the user manually designate a hardware computer to host it, or does OpenStack pick one automatically? In either case, how is the hardware computer's capacity decided? Does OpenStack provide a way to set capacity attributes of the hardware?
When you run OpenStack, each physical machine (which OpenStack calls a compute host) will periodically report how many CPUs it has and how much RAM it has, as well as how many CPUs and how much RAM have been allocated to virtual machines that are currently running.
The OpenStack scheduler uses this information to determine which compute host to run a VM on. First, it checks to see if a host has enough CPUs (by applying the CoreFilter) and enough RAM (by applying the RamFilter). Compute hosts that don't have enough CPUs or RAM available won't even be considered.
Once it has a set of candidate hosts with enough CPU and RAM, the scheduler needs to pick one of them. By default, the scheduler uses a "spread-first" strategy, allocating VMs to the machines that have the largest amount of CPU/RAM not currently allocated to VMs. It's possible to change this strategy to a "fill-first" behavior, so that the compute host with the least amount of free resources gets allocated first. This is configured by setting the nova.scheduler.least_cost.compute_fill_first_cost_fn parameter.
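Conceptually (this is only an illustration, not the real nova scheduler code), the difference between the two strategies is just which end of the free-RAM ordering the scheduler picks from:

import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the two weighting strategies described above;
// NOT the real nova scheduler, just the idea behind it.
public class SchedulerSketch {
    record Host(String name, int freeRamMb) {}

    public static void main(String[] args) {
        // Hosts that already passed the CoreFilter/RamFilter checks (made-up numbers).
        List<Host> candidates = List.of(
                new Host("compute1", 8192),
                new Host("compute2", 2048),
                new Host("compute3", 4096));

        // Spread-first (default): pick the host with the MOST free RAM.
        Host spread = candidates.stream()
                .max(Comparator.comparingInt(Host::freeRamMb)).orElseThrow();

        // Fill-first: pick the host with the LEAST free RAM that still fits the VM.
        Host fill = candidates.stream()
                .min(Comparator.comparingInt(Host::freeRamMb)).orElseThrow();

        System.out.println("spread-first picks " + spread.name()
                + ", fill-first picks " + fill.name());
    }
}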
For more information, see the chapter on scheduling in the OpenStack Compute Admin guide.

NonStop ODBC: how are connections (ODBC servers) assigned to CPUs?

We have an ODBC pool running on a NonStop server. The pool is connected to SQL/MX.
This pool is used by a few external Java applications, each of which has a JDBC pool connected to the ODBC pool (e.g. 14 connections per application).
Over time (after a few application recycles) we see an imbalance between CPUs: some have 8 ODBC processes running, some only 5. That leads to a CPU-time imbalance too.
Up to this point we assumed that a CPU is assigned to each ODBC process in round-robin fashion, which would keep the number of ODBC processes more or less equally distributed. That is not the case, though.
Is there any information on how the ODBC pool decides which CPU to choose for each newly allocated process? Does it look at CPU load? Available memory? Something else?
Sadly, even HP's own people (the ones available to us, that is) couldn't answer those questions with certainty. :-(
As it turns out, connections are indeed assigned to CPUs in round-robin fashion. But if one of the consumers (with its own pool) is restarted for any reason, its connections are released on the CPUs where they were allocated (obviously), while the new ones are allocated starting from the next CPU according to the round-robin algorithm. As a result some CPUs become less busy and others more so: hence the imbalance.
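Here is a tiny illustration of that effect (purely a sketch, not the actual NonStop ODBC allocator): a single round-robin pointer keeps advancing across all allocations, so after a consumer is recycled its new connections can land on different CPUs than the ones that were just freed.

import java.util.Arrays;

// Illustrative sketch of how round-robin assignment plus consumer restarts
// can leave CPUs with unequal numbers of ODBC server processes.
public class RoundRobinDrift {
    static final int CPUS = 4;
    static final int[] perCpu = new int[CPUS];
    static int next = 0; // global round-robin pointer, never reset

    // Allocates n connections and returns the CPUs used, so they can be freed later.
    static int[] allocate(int n) {
        int[] used = new int[n];
        for (int i = 0; i < n; i++) {
            used[i] = next;
            perCpu[next]++;
            next = (next + 1) % CPUS;
        }
        return used;
    }

    static void free(int[] used) {
        for (int cpu : used) perCpu[cpu]--;
    }

    public static void main(String[] args) {
        int[] appA = allocate(14);                 // application A's pool
        allocate(14);                              // application B's pool
        System.out.println("A and B started:      " + Arrays.toString(perCpu)); // [7, 7, 7, 7]

        for (int restart = 0; restart < 2; restart++) {
            free(appA);                            // A is recycled: its connections are released...
            appA = allocate(14);                   // ...and re-allocated from the current pointer
        }
        System.out.println("after A recycles x2:  " + Arrays.toString(perCpu)); // [6, 6, 8, 8]
    }
}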
