Getting an error from the peer channel fetch command when joining a new org's peer to a channel

bash-5.0# peer channel fetch 0 newchannel.pb -o orderer.example.com:7050 --tls --cafile $ORDERER_TLS_CA -c newchannel
2022-06-13 05:20:09.443 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2022-06-13 05:20:09.445 UTC [cli.common] readBlock -> INFO 002 Expect block, but got status: &{FORBIDDEN}
I get this error while fetching block 0 in order to join the new org2 peer to the channel.
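For context, a FORBIDDEN status on a fetch/deliver request typically means the identity making the request is not (yet) authorized to read the channel. Below is a minimal sketch of the CLI environment I would expect to be set for the new org's admin before running the fetch; the MSP ID, file paths, and peer address are hypothetical placeholders, not values from the question:
export CORE_PEER_LOCALMSPID="Org2MSP"                              # placeholder MSP ID of the joining org
export CORE_PEER_MSPCONFIGPATH=/path/to/org2/admin/msp             # admin MSP of that org
export CORE_PEER_TLS_ROOTCERT_FILE=/path/to/org2/peer/tls/ca.crt   # TLS root cert of the org's peer
export CORE_PEER_ADDRESS=peer0.org2.example.com:9051               # placeholder peer address
peer channel fetch 0 newchannel.pb -o orderer.example.com:7050 --tls --cafile $ORDERER_TLS_CA -c newchannel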

Related

MariaDB Galera SST using mysqldump

Background:
I have a three-server MariaDB Galera cluster that is happily working with the mariabackup SST method, but I recently dumped a lot of legacy data out of its tables (storage was 370GB+, now down to 75GB).
My understanding is that a mysqldump/restore can be used to shrink the on-disk database size and so avoid such large SST transfers. My thought process was therefore to drop one of the servers from the cluster and bring it back in with the mysqldump SST method to shrink its disk usage. After that, I was planning to force SST on the other nodes with the shrunken server as donor, using the mariabackup SST method again.
If my logic is flawed on the above please feel free to pull me up.
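For reference, switching the rejoining node over to the mysqldump method and forcing a full state transfer generally looks something like the sketch below; the service name, config section, and paths are distro defaults and may differ on your systems:
# On the rejoining node (service name and paths are distro defaults):
systemctl stop mariadb

# In the [galera] (or [mysqld]) section of the server config, set:
#   wsrep_sst_method = mysqldump
#   wsrep_sst_auth   = backupuser:backuppassword   # must match an account the donor accepts

# Removing grastate.dat forces a full SST on the next start
rm -f /var/lib/mysql/grastate.dat
systemctl start mariadb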
Problem:
After dropping one node and switching SST method to mysqldump, the server came up and made contact with the other nodes, but SST would fail.
With Node 2 as the SST donor, I get the logs below on that node:
2023-02-16 21:02:40 9 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
ERROR 1045 (28000): Access denied for user 'backupuser'@'192.168.2.100' (using password: YES)
ERROR 1045 (28000): Access denied for user 'backupuser'@'192.168.2.100' (using password: YES)
ERROR 1045 (28000): Access denied for user 'backupuser'@'192.168.2.100' (using password: YES)
/usr//bin/wsrep_sst_mysqldump: line 128: [: -gt: unary operator expected
ERROR 1045 (28000): Access denied for user 'backupuser'@'192.168.2.100' (using password: YES)
2023-02-16 21:02:40 9 [ERROR] WSREP: Process completed with error: wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687374620' --gtid-domain-id '0' --mysqld-args --basedir=/usr: 1 (Operation not permitted)
2023-02-16 21:02:40 9 [ERROR] WSREP: Try 1/3: 'wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687374620' --gtid-domain-id '0' --mysqld-args --basedir=/usr' failed: 1 (Operation not permitted)
2023-02-16 21:02:41 0 [Note] WSREP: (a897af72, 'tcp://0.0.0.0:4567') turning message relay requesting off
With Node 3 as the SST donor, I get the logs below on that node:
2023-02-16 21:24:01 8 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
mysqldump: Couldn't execute 'show events': Access denied for user 'backupuser'@'localhost' to database 'main_db' (1044)
ERROR 1064 (42000) at line 24: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'SST failed to complete' at line 1
2023-02-16 21:24:02 8 [ERROR] WSREP: Process completed with error: wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687381861' --gtid-domain-id '0' --mysqld-args --basedir=/usr: 1 (Operation not permitted)
2023-02-16 21:24:02 8 [ERROR] WSREP: Try 1/3: 'wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687381861' --gtid-domain-id '0' --mysqld-args --basedir=/usr' failed: 1 (Operation not permitted)
mysqldump: Couldn't execute 'show events': Access denied for user 'backupuser'@'localhost' to database 'main_db' (1044)
ERROR 1064 (42000) at line 24: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'SST failed to complete' at line 1
2023-02-16 21:24:04 8 [ERROR] WSREP: Process completed with error: wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687381861' --gtid-domain-id '0' --mysqld-args --basedir=/usr: 1 (Operation not permitted)
2023-02-16 21:24:04 8 [ERROR] WSREP: Try 2/3: 'wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687381861' --gtid-domain-id '0' --mysqld-args --basedir=/usr' failed: 1 (Operation not permitted)
For identification in the above;
192.168.1.100 is Node 1
192.168.2.100 is Node 2
192.168.3.100 is Node 3 (though its IP isn't listed above)
Server version: 10.3.32-MariaDB-log MariaDB Server
I have confirmed that the backup user account and password in the wsrep_sst_auth parameter can be used from any node to connect to any other node via the console (including itself). I even went as far as to drop a new database onto Node 1 and explicitly add that user with full permissions before trying to join it to the cluster again, with the same results.
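For reference, a sketch of the kind of account the mysqldump SST method generally expects; the '%' host pattern and the password below are placeholders, and your existing grants may already cover this:
mysql -u root -p <<'SQL'
-- account referenced by wsrep_sst_auth; host pattern and password are placeholders
CREATE USER IF NOT EXISTS 'backupuser'@'%' IDENTIFIED BY 'backuppassword';
GRANT ALL PRIVILEGES ON *.* TO 'backupuser'@'%';
FLUSH PRIVILEGES;
SQL

# quick connectivity check from another node -- should not return ERROR 1045
mysql -h 192.168.2.100 -u backupuser -p -e 'SELECT 1'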
Node 1 has been rolled back to the mariabackup SST method and has successfully rejoined the cluster, but I would like to reattempt the above to shrink the disk usage, if anyone can offer guidance on why I might be seeing these errors and what else I could try.

Error on Starting MySQL Cluster 8.0 Data Node on Ubuntu 22.04 LTS

When I start the first data node (10.1.1.103) of MySQL Cluster 8.0 on Ubuntu 22.04 LTS, I get the following error:
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 17:16:55 [ndbd] INFO -- Angel connected to '10.1.1.102:1186'
2023-01-02 17:16:55 [ndbd] INFO -- Angel allocated nodeid: 2
When I start the second data node (10.1.1.105), I get the following error:
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 11:10:04 [ndbd] INFO -- Angel connected to '10.1.1.102:1186'
2023-01-02 11:10:04 [ndbd] ERROR -- Failed to allocate nodeid, error: 'Error: Could not alloc node id at 10.1.1.102:1186: Connection done from wrong host ip 10.1.1.105.'
The management node log file reports (on /var/lib/mysql-cluster/ndb_1_cluster.log):
2023-01-02 11:28:47 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
What is the relevance of failing to open: /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory?
Why is data node on 10.1.1.105 unable to allocate a nodeid?
I initially installed a single Management Node on 10.1.1.102:
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
tar -xf mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
dpkg -i mysql-cluster-community-management-server_8.0.31-1ubuntu22.04_amd64.deb
mkdir /var/lib/mysql-cluster
vi /var/lib/mysql-cluster/config.ini
The configuration set up on config.ini:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
[ndb_mgmd]
# Management process options:
hostname=10.1.1.102 # Hostname of the manager
datadir=/var/lib/mysql-cluster # Directory for the log files
[ndbd]
hostname=10.1.1.103 # Hostname/IP of the first data node
NodeId=2 # Node ID for this data node
datadir=/usr/local/mysql/data # Remote directory for the data files
[ndbd]
hostname=10.1.1.105 # Hostname/IP of the second data node
NodeId=3 # Node ID for this data node
datadir=/usr/local/mysql/data # Remote directory for the data files
[mysqld]
# SQL node options:
hostname=10.1.1.102 # In our case the MySQL server/client is on the same Droplet as the cluster manager
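As a sanity check on the file above, I believe the management server can parse and print the effective configuration without fully starting, roughly along these lines (hedged; check ndb_mgmd --help for the exact option name on your build):
ndb_mgmd --print-full-config -f /var/lib/mysql-cluster/config.ini | less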
I then started and killed the running server and created a systemd file for Cluster manager:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
pkill -f ndb_mgmd
vi /etc/systemd/system/ndb_mgmd.service
Adding the following configuration:
[Unit]
Description=MySQL NDB Cluster Management Server
After=network.target auditd.service
[Service]
Type=forking
ExecStart=/usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
I then reloaded the systemd daemon to apply the changes, started and enabled the Cluster Manager and checked its active status:
systemctl daemon-reload
systemctl start ndb_mgmd
systemctl enable ndb_mgmd
Here is the status of the Cluster Manager:
# systemctl status ndb_mgmd
● ndb_mgmd.service - MySQL NDB Cluster Management Server
Loaded: loaded (/etc/systemd/system/ndb_mgmd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2023-01-01 08:25:07 CST; 27min ago
Main PID: 320972 (ndb_mgmd)
Tasks: 12 (limit: 9273)
Memory: 2.5M
CPU: 35.467s
CGroup: /system.slice/ndb_mgmd.service
└─320972 /usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Jan 01 08:25:07 nuc systemd[1]: Starting MySQL NDB Cluster Management Server...
Jan 01 08:25:07 nuc ndb_mgmd[320971]: MySQL Cluster Management Server mysql-8.0.31 ndb-8.0.31
Jan 01 08:25:07 nuc systemd[1]: Started MySQL NDB Cluster Management Server.
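Independently of systemd, the management client can confirm which nodes the management server actually sees as connected (a quick check, assuming the ndb_mgm client binary is installed on 10.1.1.102):
# show configured vs. connected nodes as seen by the management server
ndb_mgm -e show
# per-node status, including start phases
ndb_mgm -e "all status"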
I then set up a data node on 10.1.1.103, installing dependencies, downloading the data node and setting up its config:
apt update && apt -y install libclass-methodmaker-perl
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
tar -xf mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
dpkg -i mysql-cluster-community-data-node_8.0.31-1ubuntu22.04_amd64.deb
vi /etc/my.cnf
I entered the address of the Cluster Management Node in the configuration:
[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=10.1.1.102 # location of cluster manager
I then created a data directory and started the node:
mkdir -p /usr/local/mysql/data
ndbd
This is when I got the "Failed to open" error on the first data node (10.1.1.103):
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 17:16:55 [ndbd] INFO -- Angel connected to '10.1.1.102:1186'
2023-01-02 17:16:55 [ndbd] INFO -- Angel allocated nodeid: 2
UPDATED (2023-01-02)
Thank you @MauritzSundell. I corrected the (private) IP addresses above and no longer got:
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
ERROR: Unable to connect with connect string: nodeid=0,10.1.1.2:1186
Retrying every 5 seconds. Attempts left: 12 11 10 9 8 7 6 5 4 3 2 1, failed.
2023-01-01 14:41:57 [ndbd] ERROR -- Could not connect to management server, error: ''
Also @MauritzSundell, in order to use the ndbmtd process rather than the ndbd process, does any alteration need to be made to any of the configuration files (e.g. /etc/systemd/system/ndb_mgmd.service)?
What is the appropriate reference/tutorial documentation for MySQL Cluster 8.0? Is it the "MySQL NDB Cluster 8.0" excerpt at:
https://downloads.mysql.com/docs/mysql-cluster-excerpt-8.0-en.pdf
Or is it "MySQL InnoDB Cluster" on:
https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-introduction.html
Not sure I understand the difference.

Unable to init MicroStack (snap version of OpenStack) because nginx complains. Any suggestions?

Hi community.
I am trying to install OpenStack on an Ubuntu 20.04 server, but init fails with nginx complaining:
user01@metropolis:/var/opt$ sudo microstack.init --auto
[sudo] password for user01:
2020-08-27 12:07:41,204 - microstack_init - INFO - Configuring networking ...
2020-08-27 12:07:53,190 - microstack_init - INFO - Opening horizon dashboard up to *
2020-08-27 12:07:56,342 - microstack_init - INFO - Waiting for RabbitMQ to start ...
Waiting for 10.20.20.1:5672
2020-08-27 12:08:46,544 - microstack_init - INFO - RabbitMQ started!
2020-08-27 12:08:46,544 - microstack_init - INFO - Configuring RabbitMQ ...
2020-08-27 12:08:50,572 - microstack_init - INFO - RabbitMQ Configured!
2020-08-27 12:08:50,629 - microstack_init - INFO - Waiting for MySQL server to start ...
Waiting for 10.20.20.1:3306
2020-08-27 12:08:50,643 - microstack_init - INFO - Mysql server started! Creating databases ...
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'neutron'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1287, "Using GRANT statement to modify existing user's properties other than privileges is deprecated and will be removed in future release. Use ALTER USER statement for this operation.")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'nova'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'nova_api'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'nova_cell0'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'cinder'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'glance'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'keystone'; database exists")
result = self._query(query)
Traceback (most recent call last):
File "/snap/microstack/206/bin/microstack_init", line 33, in <module>
sys.exit(load_entry_point('microstack-init==0.0.1', 'console_scripts', 'microstack_init')())
File "/snap/microstack/206/lib/python3.6/site-packages/init/main.py", line 54, in wrapper
return func(*args, **kwargs)
File "/snap/microstack/206/lib/python3.6/site-packages/init/main.py", line 138, in init
question.ask()
File "/snap/microstack/206/lib/python3.6/site-packages/init/questions/question.py", line 210, in ask
self.yes(awr)
File "/snap/microstack/206/lib/python3.6/site-packages/init/questions/__init__.py", line 358, in yes
check('snapctl', 'start', 'microstack.nginx')
File "/snap/microstack/206/lib/python3.6/site-packages/init/shell.py", line 68, in check
raise subprocess.CalledProcessError(proc.returncode, " ".join(args))
subprocess.CalledProcessError: Command 'snapctl start microstack.nginx' returned non-zero exit status 1.
user01@metropolis:/var/opt$ netstat | grep :80
tcp 0 0 metropolis:55384 192.168.178.75:8009 ESTABLISHED
tcp 0 0 metropolis:48726 192.168.178.129:8009 ESTABLISHED
tcp 0 0 metropolis:59164 192.168.178.101:8009 ESTABLISHED
tcp 0 0 metropolis:50820 172.30.33.5:8086 ESTABLISHED
user01@metropolis:/var/opt$ netstat | grep :443
tcp 0 0 metropolis:44324 192.168.178.12:http TIME_WAIT
tcp 0 0 metropolis:44376 192.168.178.12:http TIME_WAIT
user01@metropolis:/var/opt$
Can someone please explain why nginx complains, or where to review the logs? I am new to snap.
Any other suggestions are also welcome. No Apache or standalone Nginx is running.
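For reviewing the failing snap service's logs, something along these lines should work (a sketch; the unit name is derived from the service shown in the traceback):
# logs of the nginx service inside the microstack snap
sudo snap logs -n=100 microstack.nginx
# or via journald, using the generated systemd unit name
sudo journalctl -u snap.microstack.nginx.service --no-pager | tail -n 100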
I solved the same problem, but with microstack.ovs.vswitchd.
To solve it I had to install openvswitch-switch-dpdk.
Your solution may be to install nginx, or to reinstall it.
I encountered the same problem. I tried to check whether apache2 was running on the same port, but I hadn't installed apache2 on my Ubuntu EC2 instance, so technically I shouldn't have had this error.
So I looked for any other service listening on that port using the network tools.
I then found that the datadog-agent service was using the same port. I removed the service and tried to initialize MicroStack again with microstack init --auto --control, and it worked. After MicroStack was up and running, I reinstalled the datadog-agent to monitor my EC2 instance.
If you still have this error, look for the ports in use with the network tools (as sketched below), remove the services holding them, and re-install those services after you have initialized MicroStack.
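To make the "network tools" step concrete, this is roughly how to check which process holds the ports MicroStack's nginx wants (a sketch; 80 and 443 are assumptions about the ports involved):
# list listening sockets with the owning process
sudo ss -tlnp | grep -E ':(80|443)\b'
# alternative using lsof
sudo lsof -i :80 -i :443 -sTCP:LISTEN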
My EC2 details:
Ubuntu Server 20.04 LTS
RAM: 8 GB
Storage: 20 GB
vCPU: 2
t2.large

Getting error while redeploying nodes

I am still using Corda version 1.0. When I try to redeploy nodes with existing data, I get the error below during start-up, but I am still able to access the nodes. If I clear the data and redeploy the nodes, I don't see this error message.
Logs can be found in :
C:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\kotlin-source\build\nodes\xxxxxxxx\logs
Database connection url is : jdbc:h2:tcp://xxxxxxxxx/node
E 18:38:46+0530 [main] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
Incoming connection address : xxxxxxxxxxxx
Listening on port : 10014
RPC service listening on port : 10015
Loaded CorDapps : corda-finance-1.0.0, kotlin-source-0.1, corda-core-1.0.0
Node for "xxxxxxxxxxx" started up and registered in 213.08 sec
Welcome to the Corda interactive shell.
Useful commands include 'help' to see what is available, and 'bye' to shut down the node.
Wed May 23 18:39:20 IST 2018>>> E 18:39:24+0530 [Thread-6 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$3#4a532271)] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
This looks like Artemis failed to connect to the node, which means the node failed to start.
You should look at the logs and check whether a previously started Corda node is still running and occupying the node's ports.
If there are any legacy Corda nodes that have not been killed, try ps -ef | grep java to see whether any other Java process is still alive. In particular, check the port numbers to see whether they overlap.
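To make the port check concrete, here is a sketch of looking for a leftover process already bound to this node's ports; 10014/10015 are taken from the log above, and any H2 database port would be in the node's configuration:
# Linux / macOS
lsof -i :10014 -i :10015 -sTCP:LISTEN
ps -ef | grep java

# Windows (the log above shows a Windows build path)
netstat -ano | findstr "10014 10015"
tasklist | findstr java.exe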

Upload Stemcell Error - Keystone connection timed out

I am getting a connection timeout error while uploading a stemcell to the BOSH Director. I am using the BOSH CLI v2. The following is my error log.
> bosh -e sdp-bosh-env upload-stemcell https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3541.12 --fix
Using environment '10.82.73.8' as client 'admin'
Task 13
Task 13 | 05:02:40 | Update stemcell: Downloading remote stemcell (00:00:51)
Task 13 | 05:03:31 | Update stemcell: Extracting stemcell archive (00:00:03)
Task 13 | 05:03:34 | Update stemcell: Verifying stemcell manifest (00:00:00)
Task 13 | 05:03:35 | Update stemcell: Checking if this stemcell already exists (00:00:00)
Task 13 | 05:03:35 | Update stemcell: Uploading stemcell bosh-openstack-kvm-ubuntu-trusty-go_agent/3541.12 to the cloud (00:10:41)
L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://10.81.102.5:5000/v2.0/tokens
Connection timed out - connect(2) for 10.81.102.5:5000 (Errno::ETIMEDOUT)' in 'create_stemcell' CPI method
Task 13 | 05:14:16 | Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://10.81.102.5:5000/v2.0/tokens
Connection timed out - connect(2) for 10.81.102.5:5000 (Errno::ETIMEDOUT)' in 'create_stemcell' CPI method
Task 13 Started Sat Apr 7 05:02:40 UTC 2018
Task 13 Finished Sat Apr 7 05:14:16 UTC 2018
Task 13 Duration 00:11:36
Task 13 error
Uploading remote stemcell 'https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3541.12':
Expected task '13' to succeed but state is 'error'
Exit code 1
Check OpenStack's security group for the BOSH Director machine.
The SG should contain ALLOW IPv4 to 0.0.0.0/0; if it doesn't, add at least an egress TCP rule to 10.81.102.5 on port 5000.
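If that egress rule is missing, adding it with the OpenStack CLI would look roughly like this (a sketch; the security group name is a placeholder, while the IP and port come from the error above):
openstack security group rule create \
  --egress --protocol tcp --dst-port 5000 \
  --remote-ip 10.81.102.5/32 \
  bosh-director-sg   # placeholder: the SG attached to the BOSH Director VM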
Check connection using ssh:
bbl ssh --director
nc -tvn 10.81.102.5 5000
If that doesn't help, check the network/firewall configuration.
https://bosh.io/docs/uploading-stemcells/
https://github.com/cloudfoundry/bosh-bootloader/blob/master/terraform/openstack/templates/resources.tf

Resources