I need to reproduce a MOS 9.2 installation.
Previously, MOS 9.2 was installed on 7 bare-metal servers with the following roles:
2 - compute
3 - virt (it looks like the 3 controllers were deployed as virtual machines)
2 - ceph
I've successfully installed the Fuel master, updated it to 9.2, and created an environment. Now I need to add nodes with the appropriate roles, but when I try to assign the 'virt' role to the 3 physical servers I get an error:
# fuel2 env add nodes -e 6 -n 9 -r virt
400 Client Error: Bad Request for url: http://MYIP:8000/api/v1/clusters/6/assignment/ (Role 'virt' restrictions mismatch: )
When I try to specify all 3 nodes:
# fuel2 env add nodes -e 6 -n 9,10,11 -r virt
fuel2 env add nodes: error: argument -n/--nodes: invalid int value: '9,10,11'
Also, I couldn't find the 'virt' role in the Fuel web UI.
I fixed this issue by editing /etc/nailgun/settings.yaml:
FEATURE_GROUPS:
- "advanced"
I am implementing an application using MPICH (installed via sudo apt install mpich) on Linux (Ubuntu).
My current solution looks like this:
HYDRA_TOPO_DEBUG=1 mpiexec.hydra -n 3 -bind-to core:1 MyApp
...
process 0 binding: 10001000
process 1 binding: 01000100
process 2 binding: 00100010
What I want, however, is to assign one process to 4 cores, while the other two are assigned to 2 cores each. I want an affinity map that looks like this:
process 0 binding: 11001100
process 1 binding: 00100010
process 2 binding: 00010001
Using SMPD on Windows, I was able to obtain the required result with something like this:
mpiexec -n 1 -host localhost --bind-to core:2 MyApp : -n 2 -host localhost --bind-to core:1 MyApp
This, however, does not work with Hydra. I have read every manual I could find by now and would be happy for any help - even if it's another Hydra manual that I haven't read yet. Cheers!
The "user" keyword can be used to assign logical cores manually.
Hence, one can write:
HYDRA_TOPO_DEBUG=1 mpiexec.hydra -n 3 -bind-to user:0+1+4+5,2+6,3+7 MyApp
Then, I obtain:
process 0 binding: 11001100
process 1 binding: 00100010
process 2 binding: 00010001
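For reference, each comma-separated group after user: lists the logical processor IDs bound to one rank, joined with +. So, assuming a hypothetical layout where the first four logical processors sit on one socket, something like the line below would give rank 0 that whole socket and split the remaining processors between ranks 1 and 2:
HYDRA_TOPO_DEBUG=1 mpiexec.hydra -n 3 -bind-to user:0+1+2+3,4+5,6+7 MyApp
The exact IDs to use depend on your machine's topology (hwloc's lstopo is handy for checking it).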
I have installed the node controller on CentOS 7. When I run systemctl, it shows that the eucalyptus-node service is active and running, but eucalyptus-node-keygen.service has failed. How do I fix this issue?
The eucalyptus-node-keygen.service generates keys that are used for instance migration. The service runs conditionally to generate the keys when required; if the keys are already present, they do not need to be generated.
# systemctl cat eucalyptus-node-keygen.service | grep Condition
ConditionPathExists=|!/etc/pki/libvirt/servercert.pem
#
# stat -t /etc/pki/libvirt/servercert.pem
/etc/pki/libvirt/servercert.pem 1298 8 81a4 0 0 fd00 833392 1 0 0 1582596904 1582596904 1582596904 0 4096 system_u:object_r:cert_t:s0
So typically this service will show "start condition failed", which is not an error, and no action is required.
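To confirm it is a condition skip rather than a real failure, the usual systemd checks are enough (output omitted here since it varies by systemd version):
# systemctl status eucalyptus-node-keygen.service
# journalctl -u eucalyptus-node-keygen.service
If the status output references the unmet ConditionPathExists line shown above and /etc/pki/libvirt/servercert.pem already exists, the unit simply had nothing to do.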
I have installed Eucalyptus 4.4.4 on CentOS 7 and have completed all installation steps, but it is still showing "imagingbackend" as not ready.
The imaging service is not always required when using Eucalyptus. Particularly for smaller deployments it can be a good choice to skip configuring the imaging service and save the resources (virtual machine instances) it would have used for user workloads.
To enable the imaging service you need to install and register the service image (v5 output shown):
# esi-describe-images
SERVICE VERSION ACTIVE IMAGE INSTANCES
imaging 5.0.100 * emi-b54e3b35170d2c56e 1
loadbalancing 5.0.100 * emi-b54e3b35170d2c56e 0
and create the stack (the check below shows a stack that is already created):
# esi-manage-stack -a check
Stack 'euca-internal-imaging-service' currently is in CREATE_COMPLETE state.
The steps to get to this state are covered in the documentation:
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/configure_imaging_service.html
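If the image has not been installed yet, the service image package normally provides an installer; as a rough sketch (package and command names taken from the service image tooling, so verify them against the documentation above for your exact version):
# yum install eucalyptus-service-image
# esi-install-image --install-default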
You can also use esi-manage-stack to delete / create the imaging stack:
# esi-manage-stack -a delete
services.imaging.worker.configured = false
# esi-manage-stack -a create
Stack 'euca-internal-imaging-service' currently is in DELETE_IN_PROGRESS state. Please wait till the end of previous stack change operation.
# esi-manage-stack -a create
services.imaging.worker.configured = true
#
If you have gone through these steps but the service is still not enabled, you should do basic checks to verify that you can run instances in your cloud:
# euca-describe-instance-types --show-capacity
# euserv-describe-events
and also check /var/log/eucalyptus/cloud-debug.log for errors.
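For example, a quick scan of recent errors could look like:
# tail -n 200 /var/log/eucalyptus/cloud-debug.log | grep -i error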
I have a CephFS and I need to mount this file system.
I have two pools, cephfs_data and cephfs_meta.
The output of ceph -s is:
cluster:
id: 9f3e7f80-4515-4b5f-92f0-4eb49f3cbf44
health: HEALTH_OK
services:
mon: 2 daemons, quorum mon1,osd0
mgr: osd0(active), standbys: mon1
mds: mycephfs-1/1/1 up {0=mon1=up:active}
osd: 1 osds: 1 up, 1 in
data:
pools: 3 pools, 72 pgs
objects: 24 objects, 35 KiB
usage: 1.1 GiB used, 837 GiB / 838 GiB avail
pgs: 72 active+clean
I created a user with these properties:
[client.foo]
key = AQA4d5xdlAklBxAA+Q5T+b3HLAxj2kRKzXUOSA==
caps mds = "allow r"
caps mon = "allow r"
caps osd = "allow rw tag cephfs data=mycephfs"
When I try to run this command:
sudo mount -t fuse.ceph conf=/etc/ceph/ceph.conf /mnt/cephfs/
this happens:
mount: /mnt/cephfs: wrong fs type, bad option, bad superblock on conf=/etc/ceph/ceph.conf, missing codepage or helper program, or other error.
or
when I try to run this command:
sudo mount.ceph mon1:6789:/ /mnt/cephfs/
this happens:
mount error 110 = Connection timed out
or
when I try to run this command:
sudo ceph-fuse -n client.foo /mnt/cephfs/
this happens:
ceph-fuse[64711]: starting ceph client
2019-10-21 16:21:17.329932 7f58cedbb500 -1 init, newargv = 0x55a6c11f0340 newargc=9
and then it hangs indefinitely. I never see "starting fuse".
What am I doing wrong? Which approach should I follow?
The syntax of your commands is incorrect.
You can mount the CephFS using
mount -t ceph mon1:6789:/ /mnt/ceph -o name=foo,secretfile=/path/to/keyring/file
There are many other options you can use for the mount; they can be found in the mount.ceph documentation.
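For example, with the client.foo user from the question, a minimal sketch would be to extract just the secret key into a file and point the mount at it (the paths are illustrative):
# ceph auth get-key client.foo > /etc/ceph/foo.secret
# mount -t ceph mon1:6789:/ /mnt/cephfs -o name=foo,secretfile=/etc/ceph/foo.secret
Note that secretfile expects a file containing only the key itself, not a full keyring; the key can also be passed inline with secret=<key>.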
I want to install the Neutron server on different nodes. In my environment there will be 3 provider networks, named provider1, provider2, and provider3. All of them will be flat networks. I want each Neutron server to manage a different provider network (neutron1 only controls provider1, neutron2 controls provider2, and neutron3 controls provider3). VMs will have internal networks (overlay networks) and use virtual routers to access the provider networks. The interface mapping on the Neutron servers is given below:
Neutron 1
Bond 0 : Management + overlay
Bond 1 : use for provider1
Neutron 2
Bond 0 : Management + overlay
Bond 1 : use for provider2
Neutron 3
Bond 0 : Management + overlay
Bond 1 : use for provider3
A virtual router (VR) is scheduled randomly across multiple OpenStack Networking nodes. My question is: how can I deploy a VR on a specific Neutron node (e.g. a VR whose gateway address comes from provider1 should be deployed on neutron1)? Or, if I create a highly available VR, it will be deployed on all Neutron servers; how can I select the active virtual router in that case?
I think DVR (Distributed Virtual Router) would be helpful in your case.
Below I describe some differences between DVR and non-DVR setups based on how VM traffic is routed.
With DVR, a virtual router is created on each compute node that hosts VMs, which reduces the load on the network node and removes it as a single point of failure (SPOF).
Differences in how traffic is routed:
VMs running on        | subnets   | router used with DVR        | router used without DVR
----------------------|-----------|-----------------------------|---------------------------------------------------
the same node         | different | each VM's own compute node  | the designated network node (running the L3 agent)
multiple nodes        | different | each VM's own compute node  | the designated network node (running the L3 agent)
Differences when using floating IPs (note that SNAT traffic, i.e. instances without a floating IP reaching external networks, is still centralized and not HA; only one node can route it as of Ocata):
DVR                                             | non-DVR
------------------------------------------------|--------------------------------
floating IP traffic routed on each compute node | routed on the network node only
The following configuration steps cover only a simple pattern, so refer to the official tutorials to adapt them to your system.
Prerequisite: all compute nodes have the L3, DHCP, metadata, and Open vSwitch agents installed.
Enable DVR on all compute nodes.
# vim /etc/neutron/neutron.conf
[DEFAULT]
...snip...
router_distributed = True
...snip...
Add the l2population driver on the controller node.
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
...snip...
mechanism_drivers = openvswitch,l2population
...snip...
Configure the SNAT router on the compute node designated for it.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
...snip...
agent_mode = dvr_snat
...snip...
Configure the agent mode to DVR on the remaining compute nodes.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
...snip...
agent_mode = dvr
...snip...
Edit the Open vSwitch agent config on all compute nodes.
# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
...snip...
l2_population = True
enable_distributed_routing = True
...snip...
Restart the services for the changes to take effect.
On controller node.
# systemctl restart neutron-server
On all compute nodes.
# systemctl restart neutron-l3-agent neutron-openvswitch-agent
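After the restarts, a quick sanity check (assuming the openstack CLI is available and you have admin credentials) is to confirm the L3 agents report the expected modes and that new routers come up as distributed:
# openstack network agent list --agent-type l3
# openstack router create --distributed router1
# openstack router show router1 -c distributed
A router created this way is handled on the compute nodes themselves, with SNAT falling back to the dvr_snat node configured above.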
I hope this will help you.