eucalyptus-node-keygen.service is showing failed on node controller - eucalyptus

I have installed the node controller on CentOS 7. When I run systemctl, it shows that the eucalyptus-node service is active and running, but eucalyptus-node-keygen.service has failed. How do I fix this issue?

The eucalyptus-node-keygen.service generates keys that are used for instance migration. The service runs conditionally and only generates keys when required; if the keys are already present, they do not need to be generated.
# systemctl cat eucalyptus-node-keygen.service | grep Condition
ConditionPathExists=|!/etc/pki/libvirt/servercert.pem
#
# stat -t /etc/pki/libvirt/servercert.pem
/etc/pki/libvirt/servercert.pem 1298 8 81a4 0 0 fd00 833392 1 0 0 1582596904 1582596904 1582596904 0 4096 system_u:object_r:cert_t:s0
So typically this service will show "start condition failed", which is not an error, and no action is required.
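To confirm this on a node, re-check the condition target and the unit state; a minimal check might look like the following (the exact "start condition failed" wording depends on the systemd version shipped with CentOS 7):
# stat -t /etc/pki/libvirt/servercert.pem
# systemctl status eucalyptus-node-keygen.service
If the certificate exists and the status output only reports an unmet ConditionPathExists, the keys are already in place and the unit was skipped by design.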

Related

Bespoke affinity maps (process bindings) in mpich

I am implementing an application using MPICH (installed via sudo apt-get install mpich) on Linux (Ubuntu).
My current solution looks like this:
HYDRA_TOPO_DEBUG=1 mpiexec.hydra -n 3 -bind-to core:1 MyApp
...
process 0 binding: 10001000
process 1 binding: 01000100
process 2 binding: 00100010
What I want, however, is to assign one process to 4 cores, while the other two get 2 cores each. I want an affinity map that looks like this:
process 0 binding: 11001100
process 1 binding: 00100010
process 2 binding: 00010001
Using SMPD on Windows, I was able to obtain the required result with something like this:
mpiexec -n 1 -host localhost --bind-to core:2 MyApp : -n 2 -host localhost --bind-to core:1 MyApp
This, however, does not work with Hydra. I have read every manual by now and would be happy about any help, even if it's another Hydra manual that I have not read yet. Cheers!
The "user" keyword can be used to assign logical cores manually.
Hence, one can write:
HYDRA_TOPO_DEBUG=1 mpiexec.hydra -n 3 -bind-to user:0+1+4+5,2+6,3+7 MyApp
Then, I obtain:
process 0 binding: 11001100
process 1 binding: 00100010
process 2 binding: 00010001
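If you want to double-check the resulting masks outside of Hydra's debug output, you can inspect the affinity of each rank while the job is running, for example with taskset from util-linux (the PID is a placeholder for one of the MPI ranks; hwloc-bind is an alternative if hwloc is installed):
taskset -cp <pid-of-rank-0>
hwloc-bind --get --pid <pid-of-rank-0>
The reported affinity list for rank 0 should correspond to the 11001100 mask above, i.e. logical cores 0, 1, 4 and 5.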

Setting repmgr witness node on Debian

I am trying to set up repmgr version 5 on Debian with PostgreSQL 11.
It seems like the documentation is more oriented towards CentOS/RHEL.
When I try to set up the witness node and start the repmgr daemon, I get an error, with no idea where to look to find its cause.
This is my repmgr.conf file:
node_id=3
node_name='PG-Node-Witness'
conninfo='host=10.97.7.140 user=repmgr dbname=repmgr connect_timeout=2'
data_directory='/var/lib/postgresql/11/main'
failover='automatic'
promote_command='/usr/bin/repmgr standby promote -f /etc/repmgr.conf --log-to-file'
follow_command='/usr/bin/repmgr standby follow -f /etc/repmgr.conf --log-to-file --upstream-node-id=%n'
priority=60
monitor_interval_secs=2
connection_check_type='ping'
reconnect_attempts=6
reconnect_interval=8
primary_visibility_consensus=true
standby_disconnect_on_failover=true
repmgrd_service_start_command='sudo /etc/init.d/repmgrd start' #??????
repmgrd_service_stop_command='sudo /etc/init.d/repmgrd stop' #??????
service_start_command='sudo /usr/bin/systemctl start postgresql#11-main.service'
service_stop_command='sudo /usr/bin/systemctl stop postgresql#11-main.service'
service_restart_command='sudo /usr/bin/systemctl restart postgresql#11-main.service'
service_reload_command='sudo /usr/bin/systemctl reload postgresql#11-main.service'
monitoring_history=yes
log_status_interval=60
Registration is OK:
repmgr -f /etc/repmgr.conf witness register -h 10.97.7.97
INFO: connecting to witness node "PG-Node-Witness" (ID: 3)
INFO: connecting to primary node
NOTICE: attempting to install extension "repmgr"
NOTICE: "repmgr" extension successfully installed
INFO: witness registration complete
NOTICE: witness node "PG-Node-Witness" (ID: 3) successfully registered
The repmgr daemon dry-run is OK too:
$ repmgr -f /etc/repmgr.conf daemon start --dry-run
INFO: prerequisites for starting repmgrd met
DETAIL: following command would be executed:
sudo /usr/bin/systemctl start postg...#11-main.service
I set up /etc/default/repmgrd with:
REPMGRD_ENABLED=yes
and
REPMGRD_CONF="/etc/repmgr.conf"
But I still get an error when trying to run daemon start:
$ repmgr -f /etc/repmgr.conf daemon start
I get:
NOTICE: executing: "sudo /etc/init.d/repmgrd start"
ERROR: repmgrd does not appear to have started after 15 seconds
HINT: use "repmgr service status" to confirm that repmgrd was successfully started
It is recommended to run repmgrd as a systemd service.
According to the docs (for Debian), you may first need to configure /etc/default/repmgrd.
My configuration looks like this:
# default settings for repmgrd. This file is sourced by /bin/sh from
# /etc/init.d/repmgrd
# disable repmgrd by default so it won't get started upon installation
# valid values: yes/no
REPMGRD_ENABLED=yes
# configuration file (required)
REPMGRD_CONF="/etc/repmgr/12/repmgr.conf"
# additional options
REPMGRD_OPTS="--daemonize=false"
# user to run repmgrd as
REPMGRD_USER=postgres
# repmgrd binary
REPMGRD_BIN=/bin/repmgrd
# pid file
REPMGRD_PIDFILE=/var/run/repmgrd.pid
Secondly, I would revisit sudoers (visudo) in order to check whether the non-root user can execute sudo /etc/init.d/repmgrd start.
Further, depending on your configuration, the user who runs the repmgr commands must be able to write the log files.
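For example, a sudoers rule along these lines (added via visudo) would cover the commands referenced in the repmgr.conf above; this is only a sketch, assuming repmgrd runs as the postgres user and that the PostgreSQL unit is postgresql@11-main.service (its usual name on Debian, so the "#" in the service_*_command lines above is worth double-checking):
postgres ALL=(root) NOPASSWD: /etc/init.d/repmgrd, \
    /usr/bin/systemctl start postgresql@11-main.service, \
    /usr/bin/systemctl stop postgresql@11-main.service, \
    /usr/bin/systemctl restart postgresql@11-main.service, \
    /usr/bin/systemctl reload postgresql@11-main.service
You can then verify as root with sudo -l -U postgres that the rules are in effect.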
Apparently the correct command to start the repmgr daemon is:
repmgrd -f /etc/repmgr.conf
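If the daemon still does not come up, running it in the foreground usually surfaces the underlying error directly on the console; --daemonize=false is the same option used in /etc/default/repmgrd above:
repmgrd -f /etc/repmgr.conf --daemonize=false
repmgr -f /etc/repmgr.conf service status
The second command is the one referenced in the HINT and reports whether repmgrd is seen as running on each registered node.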

I can't mount cephfs to my computer. How can I solve this problem?

I have a cephfs and I need to mount this file system.
I have two pools cephfs_data and cephfs_meta.
ceph -s output is:
cluster:
  id: 9f3e7f80-4515-4b5f-92f0-4eb49f3cbf44
  health: HEALTH_OK
services:
  mon: 2 daemons, quorum mon1,osd0
  mgr: osd0(active), standbys: mon1
  mds: mycephfs-1/1/1 up {0=mon1=up:active}
  osd: 1 osds: 1 up, 1 in
data:
  pools: 3 pools, 72 pgs
  objects: 24 objects, 35 KiB
  usage: 1.1 GiB used, 837 GiB / 838 GiB avail
  pgs: 72 active+clean
I created a user with these properties:
[client.foo]
key = AQA4d5xdlAklBxAA+Q5T+b3HLAxj2kRKzXUOSA==
caps mds = "allow r"
caps mon = "allow r"
caps osd = "allow rw tag cephfs data=mycephfs"
And when I try to run this command:
sudo mount -t fuse.ceph conf=/etc/ceph/ceph.conf /mnt/cephfs/
this happens:
mount: /mnt/cephfs: wrong fs type, bad option, bad superblock on conf=/etc/ceph/ceph.conf, missing codepage or helper program, or other error.
or
when I try to run this command:
sudo mount.ceph mon1:6789:/ /mnt/cephfs/
this happens:
mount error 110 = Connection timed out
or
when I try to run this command:
sudo ceph-fuse -n client.foo /mnt/cephfs/
this happens:
ceph-fuse[64711]: starting ceph client
2019-10-21 16:21:17.329932 7f58cedbb500 -1 init, newargv = 0x55a6c11f0340 newargc=9
and then it hangs indefinitely; I never see "starting fuse".
Where is my mistake? Which way should I follow?
The syntax of your commands is incorrect.
You can mount CephFS with:
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=foo,secretfile=/path/to/secretfile
There are many options you can use for the mount; they are listed in the mount.ceph documentation.
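For the client.foo user from the question, a minimal sequence might look like this (a sketch: the paths are arbitrary, and the key has to be extracted on a node that holds admin credentials):
ceph auth get-key client.foo > /etc/ceph/foo.secret
chmod 600 /etc/ceph/foo.secret
sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=foo,secretfile=/etc/ceph/foo.secret
Note that the secret file must contain only the base64 key itself, not a full keyring section.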

Mirantis Openstack Fuel unable to provision nodes with VIRT role

I need to reproduce a MOS 9.2 installation.
Previously, MOS 9.2 was installed on 7 bare-metal servers with the following roles:
2 - compute
3 - virt (looks like 3 controllers were deployed as virtual machines)
2 - ceph
I've successfully installed the Fuel master, updated it to 9.2, and created an environment. Now I need to add nodes with the appropriate roles, but when I try to assign the virt role to 3 physical servers I get an error:
# fuel2 env add nodes -e 6 -n 9 -r virt
400 Client Error: Bad Request for url: http://MYIP:8000/api/v1/clusters/6/assigment/ (Role 'virt' restrictions mismatch: )
When I try to specify 3 nodes:
# fuel2 env add nodes -e 6 -n 9,10,11 -r virt
fuel2 env add nodes: error: argument -n/--nodes: invalid int value: '9,10,11'
Also, I could not find the 'virt' role in the Fuel web UI.
I fixed this issue by editing /etc/nailgun/settings.yaml:
FEATURE_GROUPS:
- "advanced"

ipvsadm -L -n suddenly showing no active connections

I have a very odd problem in a proxy cluster of four Squid proxies:
One of the machines is the master. The master runs ldirectord, which checks the availability of all four machines and distributes new client connections.
All of a sudden, after years of operation, I'm encountering this problem:
1) The machine serving the master role is not being assigned new connections; old connections are served until a new proxy is assigned to the clients.
2) The other machines are still processing requests, taking over the clients from the master (so far, so good).
3) "ipvsadm -L -n" shows ever-decreasing ActiveConn and InActConn values.
Once I migrate the master role to another machine, "ipvsadm -L -n" shows lots of active and inactive connections, until, after about an hour, the same thing happens on the new master.
Datapoint: This happened again this afternoon, and now "ipvsadm -L -n" shows:
TCP 141.42.1.215:8080 wlc persistent 1800
-> 141.42.1.216:8080 Route 1 98 0
-> 141.42.1.217:8080 Route 1 135 0
-> 141.42.1.218:8080 Route 1 1 0
-> 141.42.1.219:8080 Route 1 2 0
No change in the numbers for quite some time now.
Some more stats (ipvsadm -L --stats -n):
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
-> RemoteAddress:Port
TCP 141.42.1.215:8080 1990351 87945600 0 13781M 0
-> 141.42.1.216:8080 561980 21850870 0 2828M 0
-> 141.42.1.217:8080 467499 23407969 0 3960M 0
-> 141.42.1.218:8080 439794 19364749 0 2659M 0
-> 141.42.1.219:8080 521378 23340673 0 4335M 0
Value for "Conns" is constant now for all realservers and the virtual server now. Traffic is still flowing (InPkts increasing).
I examined the output of "ipvsadm -L -n -c" and found:
25 FIN_WAIT
534 NONE
977 ESTABLISHED
Then I waited a minute and got:
21 FIN_WAIT
515 NONE
939 ESTABLISHED
It turns out that a local bird installation was injecting a route for the IP of the virtual server, which took precedence over ARP.
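In a case like this, asking the kernel which route it would actually use for the virtual IP, and checking whether bird exports one, points at the culprit quickly; for example, with the VIP from the listings above (assuming the birdc client is available on the machines involved):
ip route get 141.42.1.215
birdc show route for 141.42.1.215
If the VIP resolves to a route injected by bird rather than to the locally configured address, that route takes precedence over the ARP-based LVS setup and the director stops receiving new connections.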
