Replace OpenStack controller node?

CPU utilization is high on our OpenStack controller node.
I want to know how to replace the controller node, or any other way to fix this issue.
I can't find any documentation for this online. Need help.

Add a new node as a controller node in an HA setup.
Once the new controller (a stronger node) is up and handling the load well, deactivate the first one.


OpenStack additional compute node set up questions

This is my first time setting up an OpenStack instance on Ubuntu, and I'm having some difficulty with setting up additional compute nodes. I've set up a controller node following the devstack instructions here with the stable/xena release, and I'm trying to add an additional compute node, so I've gone through the setup here, but I have a few questions.
1. The additional compute node does not show up as a hypervisor (although it shows up under the compute service list). Does someone have a resource for how to add the compute node as a hypervisor?
2. I ran the discover_hosts tool within the devstack repo so that the compute node gets picked up by the DB, but what transport URL and database connection should the additional compute node use? Do I copy the transport URL and database connection URL used by the controller node?
3. Does OpenStack use the resources (storage, RAM, CPUs) of the additional compute node to create new VMs as well?
If someone could provide advice on how to go about setting up this compute node that would be greatly appreciated.
Thanks in advance!
Note: In the comments below I mention some steps I tried so I'll just sum them up here with their results.
nova-manage cell_v2 discover_hosts --verbose gave this output:
Found 3 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': <random_string1>.
Found 0 unmapped computes in cell <random_string1>.
Getting computes from cell: <random_string2>.
Checking host mapping for compute host 'vmname': <random_string3>.
Found 0 unmapped computes in cell <random_string2>.
So the command runs, but I think there's an issue with how things are set up in the DB, since the compute node doesn't seem to be linked to a cell.
nova-manage cell_v2 list_hosts gives 2 hosts, the controller and the VM I am trying to add, but the cell name for the compute node I'm trying to add is None.
nova-manage cell_v2 list_cells gives 3 cells, one with no name value, but it has the same cell UUID as <random_string2> in the comment above, with a transport URL that has no /nova_cell1 ending, and its DB connection string is the same as cell0's.
So I think there is an issue with how the compute node is being added to the DB?
1. Try running nova-manage cell_v2 discover_hosts on the controller node to discover the hypervisor.
2. You should not need to do anything else if step 1 works.
3. Yes; if step 1 works, OpenStack will use the additional compute node's resources to create new VMs as well.
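The checks discussed above can be sketched as a short session on the controller node. This is a minimal sketch assuming a devstack-style environment with admin credentials sourced; host names and UUIDs are placeholders:

```shell
# 1. Confirm the compute service on the new node is registered and up.
openstack compute service list --service nova-compute

# 2. Map any unmapped compute hosts to a cell.
nova-manage cell_v2 discover_hosts --verbose

# 3. Verify the host is now attached to a real cell (not "None").
nova-manage cell_v2 list_hosts
nova-manage cell_v2 list_cells

# If a bogus, unnamed cell was created by mistake, it can be removed
# by UUID before re-running discover_hosts:
#   nova-manage cell_v2 delete_cell --cell_uuid <uuid>

# 4. Once mapped, the node should appear as a hypervisor.
openstack hypervisor list
```

These commands only query or update the cell mappings; they do not change nova.conf on the compute node itself.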

Corda Accounts - Ability to move an account to a different host node

In the Corda accounts library, in order to change the host "ownership" of the account from one node to another, one would need to change the Host in the AccountInfo state to the new host (node), along with share all vault states relevant to this account.
AccountInfo doesn't have an Update command (AccountInfo commands), meaning you cannot change the host once it is created.
Has this feature been excluded for any reason? Are there any plans to introduce an Update command (with supporting flows)?
What steps would be involved in a move/transfer (host ownership)? And what are the potential caveats around this implementation?
Amol, there will be work done on this in the future, but as of now there are two options which could help you resolve your issue:
1. Set up a new account on the new node, generate new key(s) for the new account, and spend all the states from the old account to the new account.
2. If you control all the keys used to participate in your states and can migrate them to the new node, then you just need to import the key pairs somehow and copy all the states across from the old node to the new node.
Hope that helps.

Upgrading Corda Flow causes error on next run: TransactionVerificationException$ContractConstraintRejection

As mentioned in the docs on performing flow upgrades, all you need to do is basically shut down the node, replace the JAR, and start the node back up. When I do this, I get the following error the next time my upgraded flow is run:
net.corda.core.contracts.TransactionVerificationException$ContractConstraintRejection: Contract constraints failed for com.company.project.contract.MyContract, transaction: ABCDEFG
And the flow does not complete as a result. What am I doing wrong?
In my experience, the Corda flow upgrade does not update the network parameters (the states still belong to the old contract hash), so when you replace the JAR with the new contract, transactions fail the contract constraint.
I think you have three ways to manage this:
1. For a locally bootstrapped network, update the network parameters before doing the flow upgrade (I use network-bootstrapper.jar: copy the new contract JAR into the cordapps folder, and it will append the new contract hash immediately).
2. For the Corda Network, you must contact the network operator to whitelist the new hash.
3. Use the SignatureConstraint from Corda 4 (they claim it makes upgrades easier, but I haven't tried it yet).
Hope this helps.
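For the locally bootstrapped case, the first option can be sketched roughly as follows. The directory layout and JAR names are placeholders, and the exact bootstrapper invocation may differ between Corda versions, so treat this as an outline rather than an exact recipe:

```shell
# Stop all nodes before upgrading.

# Place the new contract CorDapp where the bootstrapper can see it,
# so it whitelists the new contract hash in the network parameters.
cp build/libs/my-contracts-2.0.jar bootstrap-dir/

# Re-running the bootstrapper regenerates network-parameters for every
# node under bootstrap-dir, appending the new contract hash.
java -jar corda-network-bootstrapper.jar --dir bootstrap-dir

# Install the upgraded CorDapp in each node's cordapps folder, then restart.
cp build/libs/my-contracts-2.0.jar bootstrap-dir/NodeA/cordapps/
```

With the new hash whitelisted in the network parameters, transactions built against the upgraded contract should pass the hash constraint check.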

OpenStack Live Migration

During live migration, the destination Compute Node has to perform some 'pre live migration' tasks, among them is the tap creation at the destination OVS.
I would like to know: once Nova creates such a tap interface, is the port/tap status UP?
This is the behavior I am experiencing, but I am not sure whether it is the default one or not.
If it is, is it possible to delay that action to the 'post live-migration' tasks? I am thinking of something like this:
Pre live-migration: create the port/tap interface at the destination Compute Node, but leave its status DOWN.
Live-migration.
Post live-migration: delete the port at the source Compute Node, and set the port/tap interface at the destination Compute Node UP.
Thanks for your time
Best Regards,
Jorge Gomez

Red Hat OpenStack 10: can you generate an overcloudrc file if the original is deleted?

Is it possible to generate a new overcloudrc file? One is created during deployment but ours was deleted during some other issues.
If someone knows the command to do so, please share.
Take an overcloudrc from another deployment and change the IPs to your control plane VIP; you can get this VIP from any of the controller nodes (I hope you still have stackrc, which only has the undercloud IP as a variable). For OS_PASSWORD, you can log in to any of the overcloud controller nodes and get the output of "sudo hiera keystone::admin_password".
I hope this will solve your problem.
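A reconstructed overcloudrc might look roughly like this. The variable names follow the standard OpenStack RC-file format, but the VIP and password values are placeholders you must fill in with the values retrieved as described above:

```shell
# Hypothetical reconstruction of overcloudrc; substitute real values.
# Retrieve the password on any overcloud controller with:
#   sudo hiera keystone::admin_password
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=<password-from-hiera>
export OS_AUTH_URL=http://<control-plane-VIP>:5000/v2.0
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
```

Source the file on the undercloud and run a simple command such as "openstack server list" to confirm the credentials work.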
