OpenStack additional compute node setup questions

This is my first time setting up an OpenStack instance on Ubuntu, and I'm having some difficulty setting up additional compute nodes. I set up a controller node following the devstack instructions here with the stable/xena release, and I'm trying to add an additional compute node, so I've gone through the setup here, but I have a few questions.
The additional compute node does not show up as a hypervisor (although it does show up under the compute service list). Does someone have a resource describing how to add the compute node as a hypervisor?
I ran the discover_hosts tool within the devstack repo so that the compute node gets picked up by the DB, but what transport URL and database connection should the additional compute node use? Do I copy the transport URL and database connection URL used by the controller node?
Does OpenStack also use the resources (storage, RAM, CPUs) of the additional compute node to create new VMs?
If someone could provide advice on how to go about setting up this compute node that would be greatly appreciated.
Thanks in advance!
Note: In the comments below I mention some steps I tried, so I'll just sum them up here with their results.
nova-manage cell_v2 discover_hosts --verbose gave this output:
Found 3 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': <random_string1>
Found 0 unmapped computes in cell <random_string1>.
Getting computes from cell: <random_string2>.
Checking host mapping for compute host 'vmname': <random_string3>.
Found 0 unmapped computes in cell <random_string2>
So the command runs, but I think there's an issue with how things are set up in the DB, since the compute node doesn't seem to be linked to a cell.
nova-manage cell_v2 list_hosts output gives 2 hosts, the controller and the VM I am trying to add, but the cell name for the compute node I'm trying to add is None.
nova-manage cell_v2 list_cells output gives 3 cells. One has no name value, but it has the same cell UUID as <random_string2> in the output above, a transport URL without the /nova_cell1 ending, and a DB connection string that is the same as cell0's.
So I think there is an issue with how the compute node is being added to the DB?
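For reference, a rough way to inspect and repair that cell-mapping state from the controller might look like this (the cell UUID and host name are placeholders, not values from the post):

# Show every cell with its transport URL and database connection
nova-manage cell_v2 list_cells --verbose
# Show which hosts are mapped to which cell
nova-manage cell_v2 list_hosts
# If a host ended up with a stale/wrong mapping, it can be removed and rediscovered
nova-manage cell_v2 delete_host --cell_uuid <cell-uuid> --host <compute-hostname>
nova-manage cell_v2 discover_hosts --verbose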

1. Try running nova-manage cell_v2 discover_hosts on the controller node to discover the hypervisor.
2. You should not need to do anything else if step 1 works.
3. Yes, if step 1 works.
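A minimal sketch of that flow, run from the controller node (the service and command names are the standard ones, not taken from the post):

# Map any unmapped compute hosts into their cell
nova-manage cell_v2 discover_hosts --verbose
# Verify the compute service is up and the node now appears as a hypervisor
openstack compute service list --service nova-compute
openstack hypervisor list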


Is it possible to set non-volatile wsrep_provider_options, pc.weight by command?

We are using MariaDB v10.5.15 with Galera-4 v26.4.11 clustering.
The cluster is in weighted quorum mode, so that the primary site has more votes than the non-primary one. During a network outage, the primary site with more votes prevails and continues service without the other peer.
The system needs to undergo a regular disaster recovery exercise, including switching the primary site to the other peer. So we need to change the weight assignments, for the design reasons explained above.
Within the MySQL client, we can set pc.weight dynamically with a command like set global wsrep_provider_options="pc.weight=2";. However, this command only changes the in-memory configuration, so if the host reboots, MariaDB will start again with the old value written in the configuration file /etc/my.cnf.d/server.cnf.
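A minimal sketch of that dynamic change plus a quick check that the provider picked it up (run on the node whose vote should change; client credentials are assumed to come from the usual configuration):

# Set the new weight in memory only
mysql -e 'SET GLOBAL wsrep_provider_options="pc.weight=2";'
# Confirm the running value
mysql -e 'SHOW GLOBAL VARIABLES LIKE "wsrep_provider_options";' | tr ';' '\n' | grep pc.weight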
To make the new weights non-volatile, we need to edit the part of the configuration file shown below. The edit is error-prone because the wsrep_provider_options line contains many other items, with pc.weight in the middle.
[galera]
...
wsrep_provider_options="socket.ssl=true; socket.ssl_key=/etc/pki/galera/server-key.pem; socket.ssl_cert=/etc/pki/galera/server-cert.pem; socket.ssl_ca=/etc/pki/galera/ca-cert.pem; pc.weight=2"
...
I am wondering:
Is it possible to set the pc.weight non-volatile without editing the configuration files?
Otherwise, is it possible to separate pc.weight into another .cnf file while keeping the other items of wsrep_provider_options in the original file?
We highly appreciate your help and suggestions.
No, pc.weight cannot be made non-volatile without editing a configuration file.
If you put it into another configuration file and start the server with --defaults-extra-file=/path/to/other/file.cnf, that would pick it up.
Another option is to start another node, even an arbitrator node, on the secondary site during DR/DR testing. That node might need a weight higher than 2.
How does the primary site know the weight of a node it hasn't seen? I'm not sure, so be careful.
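A minimal sketch of the --defaults-extra-file approach mentioned above (the file path is an example; note that a second wsrep_provider_options definition will likely replace, rather than merge with, the one in server.cnf, so the SSL items may need to be repeated):

# /etc/my.cnf.d/pc-weight.cnf (example path)
[galera]
wsrep_provider_options="socket.ssl=true; socket.ssl_key=/etc/pki/galera/server-key.pem; socket.ssl_cert=/etc/pki/galera/server-cert.pem; socket.ssl_ca=/etc/pki/galera/ca-cert.pem; pc.weight=2"

# Start the server so the extra file is read after the default files
# (shown as a direct invocation for illustration; --defaults-extra-file usually has to come first)
mariadbd --defaults-extra-file=/etc/my.cnf.d/pc-weight.cnf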

OKD 4.5 - How to upgrade cluster in restricted network

I want to upgrade an OKD cluster from 4.5.0-0.okd-2020-10-03-012432 to 4.5.0-0.okd-2020-10-15-235428 in a restricted network.
I could not find any steps on the OKD documentation site. However, steps are present on the OCP documentation site and look straightforward.
Queries:
Is this scenario supported in OKD?
In the document below, at step #7, what would be the corresponding step for OKD?
https://docs.openshift.com/container-platform/4.5/updating/updating-restricted-network-cluster.html#update-configuring-image-signature
Where can I get the image signature for OKD? Is this step valid for OKD?
I figured it out.
I did not perform the steps mentioned in https://docs.openshift.com/container-platform/4.5/updating/updating-restricted-network-cluster.html#update-configuring-image-signature
The "--apply-release-image-signature" flag in the "oc adm release mirror..." command creates the ConfigMap automatically.

OpenStack Live Migration

During live migration, the destination Compute Node has to perform some 'pre live migration' tasks; among them is the tap creation at the destination OVS.
I would like to know: once Nova creates such a tap interface, is the port/tap status UP?
This is the behavior I am experiencing, but I am not sure whether it is the default one or not.
In case it is, is it possible to delay that action to the 'post live-migration' tasks? I am thinking of something like this:
Pre live-migration: create port/tap interface at destination Compute Node, but with status DOWN
Post live-migration: delete port at source Compute Node, set port/tap interface at destination Compute Node UP
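For what it's worth, a rough way to watch the port state on the destination node while the migration runs (the port ID and the br-int bridge name are placeholders/assumptions):

# Neutron's view of the port status (ACTIVE / DOWN)
openstack port show <port-id> -c status
# Whether the tap device is already plugged into the destination OVS
sudo ovs-vsctl list-ports br-int | grep tap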
Thanks for your time
Best Regards,
Jorge Gomez

Red Hat OpenStack 10: can you generate an overcloudrc file if the original is deleted?

Is it possible to generate a new overcloudrc file? One is created during deployment, but ours was deleted while dealing with some other issues.
If someone knows the command to do so....
Take some overcloudrc and change the IPs to the control plane VIP; you can get this from any of the controller nodes (I hope you still have stackrc, which only has the undercloud IP as a variable). For OS_PASSWORD, you can log in to any of the overcloud controller nodes and get the output of "sudo hiera keystone::admin_password".
I hope this will solve your problem.
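A minimal sketch of a hand-rebuilt overcloudrc under those assumptions (the VIP and password are placeholders to fill in from the steps above; the exact variable set varies by release, this follows the Keystone v2-style layout used around OSP 10):

# overcloudrc (reconstructed by hand)
export OS_AUTH_URL=http://<control-plane-vip>:5000/v2.0
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=<output of: sudo hiera keystone::admin_password>
export OS_NO_CACHE=True

# Usage
source overcloudrc
openstack server list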

Provision 2 node-type Service Fabric ARM

I've been trying to provision a 2-node-type Service Fabric cluster using ARM. The secondary node type (backend) should not be exposed to the internet. For that I've created a load balancer with an internal IP address.
Everything gets provisioned correctly, but I cannot get the nodes added to the cluster. From the Azure portal, when I open the cluster, it says it has no nodes in it even though it has the node types configured.
I have even tried downloading the template produced by the Azure portal after creating a Service Fabric cluster. I have also executed one of the templates provided on GitHub, and I still cannot see any nodes in the cluster.
Any suggestion what I could be missing?
Thanks
Glad to hear you got that sorted. Regarding your follow-up question on deploying to the backend node types, that's where you'd use placement constraints. When you create clusters in Azure through ARM, it automatically sets up a placement property on each node using the node type name you defined. So on your backend nodes, assuming your node type is called "backendnode", you'll have the following placement property defined:
NodeTypeName: backendnode
When you deploy your services, just use that as your placement constraint:
New-ServiceFabricService -ApplicationName "fabric:/myapp" -ServiceName "fabric:/myapp/myservice" -ServiceTypeName "myservicetype" -Stateful -MinReplicaSetSize 2 -TargetReplicaSetSize 3 -PartitionSchemeSingleton -PlacementConstraint "NodeTypeName == backendnode"
