WSREP: Failed to read 'ready ' from: wsrep_sst_xtrabackup --role 'joiner' --address - mariadb

When I attempt to join a node to a cluster using the xtrabackup SST option, I get the following error:
WSREP: Failed to read 'ready ' from: wsrep_sst_xtrabackup --role 'joiner' --address...
How can I fix this issue?

The solution was to run wsrep_sst_xtrabackup manually, which reported:
Can't find nc in the path
Installing netcat with
yum install nc
fixed the issue.
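For reference, a minimal sketch of the same diagnosis on the joiner node; the --address and --datadir values are illustrative placeholders, not taken from the original report:
# Run the SST script by hand to surface the underlying error
wsrep_sst_xtrabackup --role 'joiner' --address '192.168.0.11:4444' --datadir '/var/lib/mysql/'
# If it prints "Can't find nc in the path", install netcat and retry the join
yum install -y nc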

Related

brew install mariadb fails as system can not chown for auth_pam_tool

brew install does not work. This is the log:
% brew install mariadb
==> Downloading https://homebrew.bintray.com/bottles/mariadb-10.4.13.mojave.bottle.tar.gz
Already downloaded: /Users/shingo/Library/Caches/Homebrew/downloads/d56104142081a8230646ac3f245adf2414e515cd5f2aeeb0637614e9966e882c--mariadb-10.4.13.mojave.bottle.tar.gz
==> Pouring mariadb-10.4.13.mojave.bottle.tar.gz
==> /usr/local/Cellar/mariadb/10.4.13/bin/mysql_install_db --verbose --user=shingo --basedir=/usr/local/Cellar/mariadb/10.4.13 --datadir=/usr/local/var/mysql --tmpdir=/tmp
Last 15 lines from /Users/shingo/Library/Logs/Homebrew/mariadb/post_install.01.mysql_install_db:
shell> /usr/local/Cellar/mariadb/10.4.13/bin/mysql -u root mysql
mysql> show tables;
Try 'mysqld --help' if you have problems with paths. Using
--general-log gives you a log in /usr/local/var/mysql that may be helpful.
The latest information about mysql_install_db is available at
https://mariadb.com/kb/en/installing-system-tables-mysql_install_db
You can find the latest source at https://downloads.mariadb.org and
the maria-discuss email list at https://launchpad.net/~maria-discuss
Please check all of the above before submitting a bug report
at http://mariadb.org/jira
Warning: The post-install step did not complete successfully
You can try again using `brew postinstall mariadb`
==> Caveats
A "/etc/my.cnf" from another install may interfere with a Homebrew-built
server starting up correctly.
MySQL is configured to only allow connections from localhost by default
To start mariadb:
brew services start mariadb
Or, if you don't want/need a background service you can just run:
mysql.server start
==> Summary
🍺 /usr/local/Cellar/mariadb/10.4.13: 744 files, 169.9MB
I have tried to execute mysql_install_db
I tried running mysql_install_db directly, outside of brew, because brew only shows the last 15 lines of the log. This is the output:
% /usr/local/Cellar/mariadb/10.4.13/bin/mysql_install_db --verbose --user=shingo --basedir=/usr/local/Cellar/mariadb/10.4.13 --datadir=/usr/local/var/mysql --tmpdir=/tmp
chown: /usr/local/Cellar/mariadb/10.4.13/lib/plugin/auth_pam_tool_dir/auth_pam_tool: Operation not permitted
Couldn't set an owner to '/usr/local/Cellar/mariadb/10.4.13/lib/plugin/auth_pam_tool_dir/auth_pam_tool'.
It must be root, the PAM authentication plugin doesn't work otherwise..
Installing MariaDB/MySQL system tables in '/usr/local/var/mysql' ...
2020-05-29 22:13:03 0 [Note] /usr/local/Cellar/mariadb/10.4.13/bin/mysqld (mysqld 10.4.13-MariaDB) starting as process 45440 ...
2020-05-29 22:13:03 0 [ERROR] /usr/local/Cellar/mariadb/10.4.13/bin/mysqld: option '--innodb-large-prefix' requires an argument
2020-05-29 22:13:03 0 [ERROR] Parsing options for plugin 'InnoDB' failed.
2020-05-29 22:13:03 0 [ERROR] /usr/local/Cellar/mariadb/10.4.13/bin/mysqld: unknown variable 'mysqlx-bind-address=127.0.0.1'
2020-05-29 22:13:03 0 [ERROR] Aborting
Installation of system tables failed! Examine the logs in
/usr/local/var/mysql for more information.
The problem could be conflicting information in an external
my.cnf files. You can ignore these by doing:
shell> /usr/local/Cellar/mariadb/10.4.13/bin/mysql_install_db --defaults-file=~/.my.cnf
You can also try to start the mysqld daemon with:
shell> /usr/local/Cellar/mariadb/10.4.13/bin/mysqld --skip-grant-tables --general-log &
and use the command line tool /usr/local/Cellar/mariadb/10.4.13/bin/mysql
to connect to the mysql database and look at the grant tables:
shell> /usr/local/Cellar/mariadb/10.4.13/bin/mysql -u root mysql
mysql> show tables;
Try 'mysqld --help' if you have problems with paths. Using
--general-log gives you a log in /usr/local/var/mysql that may be helpful.
The latest information about mysql_install_db is available at
https://mariadb.com/kb/en/installing-system-tables-mysql_install_db
You can find the latest source at https://downloads.mariadb.org and
the maria-discuss email list at https://launchpad.net/~maria-discuss
Please check all of the above before submitting a bug report
at http://mariadb.org/jira
I noticed that the system cannot chown auth_pam_tool because of "Operation not permitted".
auth_pam_tool permissions
These are the permissions on the file:
% ls -l /usr/local/Cellar/mariadb/10.4.13/lib/plugin/auth_pam_tool_dir/auth_pam_tool
-r-xr-xr-x 1 shingo staff 13608 5 10 04:28 /usr/local/Cellar/mariadb/10.4.13/lib/plugin/auth_pam_tool_dir/auth_pam_tool
How can I fix "Operation not permitted"? Or is there another reason why the installation fails?
Self-resolved.
Trying to start it anyway
Even though the installation was not successful, I tried running % mysql.server start anyway.
Starting the server created an error log file, which shows:
2020-05-30 8:47:10 0 [Warning] InnoDB: innodb_open_files 300 should not be greater than the open_files_limit 256
2020-05-30 8:47:10 0 [ERROR] /usr/local/Cellar/mariadb/10.4.13/bin/mysqld: unknown variable 'mysqlx-bind-address=127.0.0.1'
2020-05-30 8:47:10 0 [ERROR] Aborting
The unknown variable mysqlx-bind-address seems to be causing the error.
How to fix the unknown variable
I found a question describing the same error, which discussed the my.cnf file.
~/.my.cnf did not exist on my Mac.
/etc/my.cnf did not exist on my Mac.
Another question pointed me to the remaining my.cnf locations.
Finally I found a my.cnf at /usr/local/etc/my.cnf, and the mysqlx-bind-address setting was indeed written there.
So I ran rm /usr/local/etc/my.cnf, and then brew reinstall succeeded!
The file permissions turned out to be irrelevant.
This worked for me:
rm /opt/homebrew/etc/my.cnf
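Putting the fix together, a minimal sketch of checking each my.cnf location and removing the conflicting file before reinstalling; the backup step and the Apple Silicon path are assumptions, not part of the original answers:
# Check every my.cnf location MariaDB may read on macOS
ls -l /etc/my.cnf /usr/local/etc/my.cnf /opt/homebrew/etc/my.cnf ~/.my.cnf 2>/dev/null
# Back up and remove the conflicting file (Intel Homebrew path shown; use /opt/homebrew/etc on Apple Silicon)
mv /usr/local/etc/my.cnf /usr/local/etc/my.cnf.bak
brew reinstall mariadb
brew services start mariadb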

ICp 2.1.0.1: Installation failed with error TASK [master: Waiting for MariaDB service to start]

I am installing ICp 2.1.0.1 and I received an error at TASK [master: Waiting for MariaDB service to start]: msg: The MariaDB component failed to start.
After this message the installation completed with a failed status.
We are installing ICp with 3 Masters, 3 Proxies and 2 Workers. We have 1 IP for VIP master and 1 for VIP proxy.
I tried to install multiple times and all installations got the same error.
In prior occurrences of that error, the correct DB admin password was not being used, so check the DB user and password to resolve the issue.
Would you validate whether each master host was able to access port 3306 on the other hosts?
If you run with .. install -vv | tee -a install-log.txt, do you get additional details as well?
The error was solved by following the steps below.
Check whether kubelet is running:
Log in to your master node.
Run the following command to check kubelet status:
systemctl status kubelet
If kubelet is not running, run the following command to get the logs:
journalctl -u kubelet &> kubelet.log
We found the error in the kubelet.log log:
Error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
We found the troubleshooting guide at the first link below and the solution in ICP issue 4651:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/troubleshoot/etcd_fails.html
https://github.ibm.com/IBMPrivateCloud/roadmap/issues/4651
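The kubelet error above points at swap being enabled on the node; a minimal remediation sketch, assuming the master nodes can simply run without swap (the --fail-swap-on alternative comes from the error message itself):
# Disable swap immediately on each affected node
swapoff -a
# Comment out any swap entries in /etc/fstab so swap stays off after reboot
# (alternative: pass --fail-swap-on=false to kubelet, as the error message suggests)
systemctl restart kubelet
systemctl status kubelet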

Could not find a version that satisfies the requirement pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178)

I just followed the OpenStack Rally quick start guide to create a tempest verifier with Rally v0.9.1 in an OpenStack Ocata/stable deployment. The command failed:
(rally-15.1.2) root#infra1-utility-container-f31faeb0:~/.rally/verification# rally verify create-verifier --type tempest --name tempest-verifier
2017-05-21 07:53:13.410 11422 INFO rally.api [-] Creating verifier 'tempest-verifier'.
2017-05-21 07:53:13.528 11422 INFO rally.verification.manager [-] Cloning verifier repo from https://git.openstack.org/openstack/tempest.
2017-05-21 07:53:37.174 11422 INFO rally.verification.manager [-] Creating virtual environment. It may take a few minutes.
2017-05-21 07:53:42.323 11422 ERROR rally.verification.utils [-] Failed cmd: '['pip', 'install', '-e', './']'
2017-05-21 07:53:42.324 11422 ERROR rally.verification.utils [-] Error output: 'Obtaining file:///root/.rally/verification/verifier-091a49ab-1241-40a3-bc9b-531d7f091e37/repo
Collecting pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178)
Could not find a version that satisfies the requirement pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178) (from versions: 1.10.0)
No matching distribution found for pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178)
'
Command failed, please check log for more info
As the current version of pbr is 2.0.0, I'm not sure why pbr installation failed.
(rally-15.1.2) root#infra1-utility-container-f31faeb0:~/.rally/verification# pip freeze|grep pbr
pbr==2.0.0
The question is: how can I adjust the requirement check for pbr? Or is it possible to choose an older version of tempest?
Thanks.
Solved it myself.
After uploading the two missing Python packages, os_testr-0.8.2-py2-none-any.whl and testrepository-0.0.19.tar.gz, into the local repo (an LXC container created by openstack-ansible), the Tempest plugin was finally installed.
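A rough sketch of that workaround, assuming a machine with internet access and that you know the package directory served by the openstack-ansible repo container (the destination path below is a placeholder):
# Download the distributions the local index is missing
pip download os-testr==0.8.2 testrepository==0.0.19 -d ./missing-pkgs
# (add 'pbr!=2.1.0,>=2.0.0' to the list if the index still only serves pbr 1.10.0)
# Copy them into the directory the repo container serves to pip
scp ./missing-pkgs/* repo-container:/var/www/repo/pools/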

Unable to create MariaDB Galera Cluster

I have built an image based on mariadb:10.1 which basically adds a new cluster.conf, but I am facing the following error on the second node after the first node started working successfully. Can somebody help me debug this?
Error log tail
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
at gcomm/src/pc.cpp:connect():162
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1380: Failed to open channel 'test_cluster' at 'gcomm://172.17.0.2,172.17.0.3,172.17.0.4': -110 (Connection timed out)
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: gcs connect failed: Connection timed out
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: wsrep::connect(gcomm://172.17.0.2,172.17.0.3,172.17.0.4) failed: 7
2016-09-28 10:12:55 139799503415232 [ERROR] Aborting
MySQL init process failed.
Debugging steps taken
NOTE: Container IP addresses were ensured to be the same as shown.
To ensure networking between containers is working, I created another container which was able to log in to the first container's mysql instance.
This is definitely not related to MYSQL_HOST.
To see if the container was running out of memory, I used docker stats and saw that the failed container used only a meagre 142MB throughout its lifecycle until it failed, far less than the total memory it was allowed (~4GB).
I am using Docker for Mac, but running the same setup in a CentOS VirtualBox VM gives the same results, so it doesn't look like Docker for Mac is the problem.
Config
[mysqld]
user=mysql
binlog_format=ROW
bind-address=0.0.0.0
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M
innodb_file_per_table=1
innodb_doublewrite=1
query_cache_size=0
query_cache_type=0
wsrep_on=ON
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_sst_method=rsync
Steps to start containers
# bootstrap node
docker run --rm -e MYSQL_ROOT_PASSWORD=123 \
activatedgeek/mariadb:devel \
--wsrep-cluster-name=test_cluster \
--wsrep-cluster-address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4 \
--wsrep-new-cluster
# add node into cluster
docker run --rm -e MYSQL_ROOT_PASSWORD=123 \
activatedgeek/mariadb:devel \
--wsrep-cluster-name=test_cluster \
--wsrep-cluster-address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4
# add node into cluster
docker run --rm -e MYSQL_ROOT_PASSWORD=123 \
activatedgeek/mariadb:devel \
--wsrep-cluster-name=test_cluster \
--wsrep-cluster-address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4
This problem is caused by the hanging init process. The configuration and CLI arguments above are correct. The only thing to be done before the init process starts is to create an empty mysql directory inside the data directory (/var/lib/mysql by default). That directory must be created on all nodes except the bootstrap node.
mkdir -p /var/lib/mysql/mysql
See the sample MariaDB Cluster repository for usage; it uses a custom MariaDB image and is a proof of concept for creating clusters.
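One way to apply this on a joining node is to pre-create the directory in a bind-mounted data directory before starting the container; this is only a sketch, and the host path /srv/galera/datadir is an illustrative assumption:
# On each joining node (not the bootstrap node), pre-create the empty mysql/
# directory inside the data directory before MariaDB's init process runs
mkdir -p /srv/galera/datadir/mysql
docker run --rm -e MYSQL_ROOT_PASSWORD=123 \
  -v /srv/galera/datadir:/var/lib/mysql \
  activatedgeek/mariadb:devel \
  --wsrep-cluster-name=test_cluster \
  --wsrep-cluster-address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4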
I guess your containers should either publish the required ports:
-p 3306:3306 -p 4444:4444 -p 4567:4567 -p 4568:4568
or be --link'ed together.
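A sketch of the bootstrap command from the question with those ports published, assuming the nodes run on separate Docker hosts (on a single host these mappings would collide):
docker run --rm -e MYSQL_ROOT_PASSWORD=123 \
  -p 3306:3306 -p 4444:4444 -p 4567:4567 -p 4568:4568 \
  activatedgeek/mariadb:devel \
  --wsrep-cluster-name=test_cluster \
  --wsrep-cluster-address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4 \
  --wsrep-new-cluster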

Riak 1.3.1 will not start on a Lucid EC2 instance

I have installed Riak (via apt-get) on an EC2 instance running Ubuntu Lucid, amd64, with libssl.
When running riak start I get:
Attempting to restart script through sudo -H -u riak
Riak failed to start within 15 seconds,
see the output of 'riak console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.
Running riak console:
Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot /usr/lib/riak/releases/1.3.1/riak
-embedded -config /etc/riak/app.config
-pa /usr/lib/riak/lib/basho-patches
-args_file /etc/riak/vm.args -- console
Root: /usr/lib/riak
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:2:2] [async-threads:64] [kernel-poll:true]
/usr/lib/riak/lib/os_mon-2.2.9/priv/bin/memsup: Erlang has closed.
Erlang has closed
{"Kernel pid terminated",application_controller,"{application_start_failure,riak_core, {shutdown,{riak_core_app,start,[normal,[]]}}}"}
Crash dump was written to: /var/log/riak/erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,riak_core, {shutdown,{riak_core_app,start,[normal,[]]}}})
The error logs:
2013-04-24 11:36:20.897 [error] <0.146.0> CRASH REPORT Process riak_core_handoff_listener with 1 neighbours exited with reason: bad return value: {error,eaddrinuse} in gen_server:init_it/6 line 332
2013-04-24 11:36:20.899 [error] <0.145.0> Supervisor riak_core_handoff_listener_sup had child riak_core_handoff_listener started with riak_core_handoff_listener:start_link() at undefined exit with reason bad return value: {error,eaddrinuse} in context start_error
2013-04-24 11:36:20.902 [error] <0.142.0> Supervisor riak_core_handoff_sup had child riak_core_handoff_listener_sup started with riak_core_handoff_listener_sup:start_link() at undefined exit with reason shutdown in context start_error
2013-04-24 11:36:20.903 [error] <0.130.0> Supervisor riak_core_sup had child riak_core_handoff_sup started with riak_core_handoff_sup:start_link() at undefined exit with reason shutdown in context start_error
I'm new to Riak and basically tried to run through the "Fast Track" docs.
None of the default core IP settings in the configs have been changed. They are still set to {http, [ {"127.0.0.1", 8098 } ]}, {handoff_port, 8099 }
Any help would be greatly appreciated.
I know this is old, but there is some solid documentation on the Riak site about the errors found in the erl_crash.dump file.
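Beyond the crash dump, the error logs above show {error,eaddrinuse} for riak_core_handoff_listener, meaning something is already bound to the handoff port (8099 per the config quoted in the question). A minimal diagnostic sketch, assuming a Linux host with net-tools installed:
# See what is already listening on the Riak HTTP and handoff ports
sudo netstat -tlnp | grep -E ':(8098|8099)'
# A stale Erlang VM from a previous start attempt is a common culprit
ps aux | grep [b]eam
# Stop the stale process (or run riak stop), then try starting Riak again
riak start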
