MariaDB Galera cluster does not come up after killing the mysql process

I have a MariaDB Galera cluster with 2 nodes, and it is up and running.
Before moving to production, I want to make sure that if a node crashes abruptly, it comes back up on its own.
I tried using systemd "restart", but after killing the mysql process the mariadb service does not come up. So, is there any tool or method I can use to automate bringing the nodes back up after a crash?

Galera clusters need to have quorum (at least 3 nodes).
In order to avoid a split-brain condition, the minimum recommended number of nodes in a cluster is 3. Blocking state transfer is yet another reason to require a minimum of 3 nodes in order to enjoy service availability in case one of the members fails and needs to be restarted. While two of the members will be engaged in state transfer, the remaining member(s) will be able to keep on serving client requests.
You can read more here.
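If you also want a crashed mysqld to be restarted automatically, a systemd drop-in is one option. This is a minimal sketch, assuming the node is managed by the stock mariadb.service unit; the file path and timing are illustrative assumptions:
# /etc/systemd/system/mariadb.service.d/restart.conf (assumed path)
[Service]
Restart=on-failure
RestartSec=5s
Run systemctl daemon-reload afterwards. Keep in mind that a restarted node can only rejoin the cluster automatically while the remaining nodes still hold quorum, which is another reason to run at least 3 nodes rather than 2.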

Related

MariaDB Spider with Galera Clusters failover solutions

I am having problems building a database solution for an experiment that has to ensure HA and performance (sharding).
Right now I have a Spider node and two Galera clusters (3 nodes in each cluster), as shown in the figure below, and this configuration works well in general cases:
However, as far as I know, when the Spider engine performs sharding, it must be given the IP of one primary node per cluster in order to distribute SQL statements to two nodes in different Galera clusters.
So my first question here is:
Q1): When machine .12 goes down (e.g. is destroyed), how can I make .13 or .14 (one of them) automatically replace .12?
(Figure: the servers that the Spider engine knows about.)
Q2): Are there any open source tools (or technologies) that can help me deal with this situation? If so, please explain how they work. (Maybe MaxScale? But I have never understood what it is and what it can do.)
Q3): The motivation for this experiment is as follows. An automated factory has many machines, and each machine generates data that must be recorded during the production process (maybe hundreds or thousands of records per second) in order to observe the operation of the machines and maximize the quality of each batch of products.
So my question is: is this architecture (Figure 1) reasonable? Or please provide your suggestions.
You could use MaxScale in front of the Galera cluster to make the individual nodes appear like a combined cluster. This way Spider will be able to seamlessly access the shard even if one of the nodes fails. You can take a look at the MaxScale tutorial for instructions on how to configure it for a Galera cluster.
Something like this should work:
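A sketch based on the MaxScale Galera tutorial; the server addresses, ports and credentials below are placeholder assumptions:
[node1]
type=server
address=192.168.1.12
port=3306
protocol=MariaDBBackend
[node2]
type=server
address=192.168.1.13
port=3306
protocol=MariaDBBackend
[node3]
type=server
address=192.168.1.14
port=3306
protocol=MariaDBBackend
[Galera-Monitor]
type=monitor
module=galeramon
servers=node1,node2,node3
user=maxscale_monitor
password=maxscale_pw
monitor_interval=2000
[Galera-Service]
type=service
router=readwritesplit
servers=node1,node2,node3
user=maxscale
password=maxscale_pw
[Galera-Listener]
type=listener
service=Galera-Service
protocol=MariaDBClient
port=4006
Spider would then be pointed at the MaxScale listener port (4006 above) rather than at an individual Galera node.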
This of course has the same limitation that a single database node has: if the MaxScale server goes down, you'll have to switch to a different MaxScale for that cluster. The benefit of using MaxScale is that it is in some sense stateless which means it can be started and stopped almost instantly. A network load balancer (e.g. ELB) can already provide some form of protection from this problem.

How do I set up multiple MariaDB instances in one single VM using Galera Cluster?

How do I set up multiple MariaDB servers in one single VM using Galera Cluster?
If configuration links are available, please share them.
I have searched the Galera website; it says to add nodes to a cluster, but not how to add multiple MariaDB servers on one machine to a cluster.
Analysis of having 3 Galera nodes in a single server, comparing three layouts:
* All 3 in a single VM
* One in each of 3 VMs
* No VMs (all 3 instances directly on the host)
Notes:
* Galera provides crash protection -- if a node goes down due to hardware failure, the other nodes continue serving the database needs. Not so with all of them sharing the same server and disk(s).
* By having multiple instances of MySQL (whether as Galera nodes or not), you can make better use of the CPUs. But, since MySQL rarely needs all of the available CPU, I see no advantage in this configuration.
* Each instance uses some RAM for static things -- 3 instances leads to 3 copies of such. Other things (e.g., caches) scale with RAM size.
* No advantage in networking.
(There may be other reasons why there is virtually no difference between a single instance and multiple instances on one server.)
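For reference, if you do want to run three instances on one host, each mysqld needs its own client port, socket, datadir and Galera ports. A rough sketch for the first instance; every path, port and the provider location are illustrative assumptions:
# /etc/mysql/node1.cnf (assumed path)
[mysqld]
port=3306
socket=/var/run/mysqld/node1.sock
datadir=/var/lib/mysql-node1
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=my_cluster
wsrep_cluster_address=gcomm://127.0.0.1:4567,127.0.0.1:4667,127.0.0.1:4767
wsrep_node_address=127.0.0.1:4567
wsrep_provider_options="base_port=4567"
Instances 2 and 3 would repeat this with port=3307/3308, their own socket and datadir, and base_port=4667/4767; the SST ports have to be separated in the same way.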

MariaDB master-master and master-slave replication at the same time

Currently I have 2 data centers, and MariaDB master-master semi-sync replication will be employed to synchronize data between the 2 sites.
In order to improve local availability, we plan to deploy one more MariaDB server in each site to form a master-slave pair, i.e. cross-site replication is master-master, while local replication is master-slave.
I would like to know if this topology makes sense and is technically feasible.
Can MariaDB support this mixed mode of replication at the same time?
No, you can't have partially asynchronous master-slave replication and semi-sync replication on the same server.
I recommend moving to either Galera (3 sites are recommended to alleviate split brain, or devise an alternative resolution);
Or multi-master all-(server)-to-all-(other-servers) replication (without log-slave-updates).
A Master can have any number of Slaves; those slaves can be either local to the Master's datacenter, or remote. One of those "Slaves" can be another Master, thereby giving you "dual-Master".
For Dual-Master, I recommend writing to only one of them (until a failover).
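One common way to enforce that single-writer rule (not part of the answer above, just standard practice) is to keep the passive Master read-only until you fail over:
-- on the passive Master (sketch; note that read_only does not restrict users with SUPER privilege):
SET GLOBAL read_only = ON;
-- at failover, promote it by allowing writes again:
SET GLOBAL read_only = OFF;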
These are partial HA solutions:
* Replication
* Dual-Master
* Semi-sync
* Using only 2 datacenters
Galera (and, soon, Group Replication) is better than any combination of the above. But for good HA, you need 3 geographically separate datacenters (think floods, tornadoes, etc.).
I am not familiar with a restriction against async + semi-sync on the same server.
Be aware that every Slave must perform every write operation, so a Slave is not necessarily less busy than a Master. However, having more than one server for "reads" does spread out the read load.
For Galera, 3 nodes is recommended. 4 or 5 is OK; more than 5 may stress the network and the handshaking needed. Galera allows any number of Slaves hanging off each 'node'.

How does Galera Cluster guarantee consistency?

I'm searching for a highly available SQL solution! One of the articles that I read was about "virtually synchronous" replication in Galera Cluster: https://www.percona.com/blog/2012/11/20/understanding-multi-node-writing-conflict-metrics-in-percona-xtradb-cluster-and-galera/
He says
When the writeset is actually applied on a given node, any locking
conflicts it detects with open (not-yet-committed) transactions on
that node cause that open transaction to get rolled back.
and
Writesets being applied by replication threads always win
What will happen if the WriteSet conflicts with a committed transaction?
He also says:
Writesets are then “certified” on every node (in order).
How does Galera Cluster order WriteSets across the cluster? Is there a hidden master node that orders the WriteSets, something like ZooKeeper? Or what?
This is for the second question (about how Galera orders the writesets).
Galera implements Extended Virtual Synchrony (EVS) based on the Totem protocol. The Totem protocol implements a form of token passing, where only the node with the token is allowed to send out new requests (as I understand it). So the writes are ordered since only one node at a time has the token.
For the academic background, you can look at these:
The Totem Single-Ring Ordering and Membership Protocol
The database state machine and group communication issues
(This Answer does not directly tackle your Question, but it may give you confidence that Galera is 'good'.)
In Galera (PXC, etc), there are two general times when a transaction can fail.
On the node where the transaction is being run, the actions are compared to what is currently running on the same node. If there is a conflict, either one of the transactions is stalled (think innodb_lock_wait_timeout) or is deadlocked (and rolled back).
At COMMIT time, info is sent to all the other nodes; they check your transaction against anything on the node or pending (in gcache). If there is a conflict, a message is sent back saying that there would be trouble. So, the originating node has the COMMIT fail. For this reason, you must check for errors even on the COMMIT statement.
As with single-node systems, a deadlock is usually resolved by replaying the entire transaction.
In the case of autocommit, there is a small, configurable, number of retries, after which the statement will fail. So, again, check for errors. However, since a retry has already been tried, you may want to abort the program.
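To illustrate why the COMMIT must be checked: a certification failure surfaces as a deadlock error on the COMMIT itself. A sketch (the table and the conflicting write are hypothetical):
START TRANSACTION;
UPDATE account SET balance = balance - 10 WHERE id = 1;
-- meanwhile the same row is updated and committed on another node
COMMIT;
-- the COMMIT can fail here with a deadlock error (ER_LOCK_DEADLOCK, 1213);
-- the application must catch it and replay the whole transaction (or give up after a few retries)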
Currently (in my opinion) Galera, with at least 3 nodes in at least 3 different physical locations, is the best available HA solution for MySQL. It can effectively survive any single-point-of-failure. (Group Replication / InnoDB Cluster, from Oracle, is coming soon, and is very promising.)
One thing to note is that the "critical read" problem has a solution in Galera, but you have to take action. See wsrep_sync_wait. (As of this writing, InnoDB Cluster has no solution.)
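A sketch of what that looks like in practice (wsrep_sync_wait is the real variable; the table is hypothetical):
SET SESSION wsrep_sync_wait = 1;   -- reads now wait until the incoming replication queue has been applied
SELECT balance FROM account WHERE id = 1;   -- sees writes committed earlier on any other node
SET SESSION wsrep_sync_wait = 0;   -- switch it back off to avoid the extra latency on every read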
See http://mysql.rjweb.org/doc.php/galera for tips (some of which are included above) on coding differences when moving to PXC/Galera.

DR setup for MariaDB Galera Clusters

I have two MariaDB Galera Clusters with 3 nodes each.
Cluster 1 : MDB-01,MDB-02,MDB-03
Cluster 2 : MDBDR-01,MDBDR-02,MDBDR-03
These two clusters are in two different data centers which are in two geographical regions.
Cluster 1 is PRODUCTION cluster and Cluster 2 is DR cluster
Asynchronous replication using GTID has been set up from MDB-01 to MDBDR-01
as per given configuration in the link :
http://www.severalnines.com/blog/deploy-asynchronous-replication-slave-mariadb-galera-cluster-gtid-clustercontrol
(The link describes asynchronous replication from a MariaDB Galera Cluster to a standalone MariaDB instance.
However, I have set up the same configuration for asynchronous replication from one MariaDB Galera Cluster to another MariaDB Galera Cluster.)
I am able to switch from the current slave relationship MDBDR-01 => MDB-01 to MDBDR-01 => MDB-02 with the command below:
CHANGE MASTER TO master_host='MDB-02'
However, I am struggling with how to point MDBDR-02 => MDB-01 in case MDBDR-01 goes down.
Could you please provide input on how to point MDBDR-02 => MDB-01 or MDBDR-03 => MDB-01?
One thing you need to understand, which that article briefly mentions, is that MariaDB's GTID implementation can cause problems in this situation. Since each node maintains its own list of GTIDs and Galera transactions do not have their own id, it is possible that the same GTID does not point to the same place on each server (see this article).
Due to that problem, I wouldn't attempt what you're doing without MariaDB 10.1. MariaDB 10.1.8 was just released and is the first GA release of the 10.1 line. 10.1 changes the GTID implementation so galera transactions use their own server_id (set via a config variable). You can then filter replication on the slaves to only replicate the galera id.
To switch to a different slave server, you will need to get the last GTID executed on the old slave. The gtid_slave_pos is stored in mysql.gtid_slave_pos, but mysql.* tables are not replicated.
I'm not completely sure, and I don't have a way of testing, whether the original GTID of a transaction is passed to the other slave galera nodes (i.e. if the master cluster's galera server_id is 1 and the slave cluster's galera server_id is 2 and MDBDR-01 gets a slave event with GTID 1-1-123, will MDBDR-02 log it as 1-1-123 or 1-2-456?). I'm guessing that it doesn't, since the new GTID implementation should change the server_id, but you may be able to verify this.
Since you probably can't get the last executed master GTID from a different slave galera node, you will probably need to get the GTID from the old slave, which may not be possible unless you shut down the old slave gracefully. You may instead need to find the GTID of the last executed transaction in the binlog on the new slave and try to match it to a transaction in the master's binlog. Also, if you're not using sync_binlog = 1, the binlog is not reliable and might be a bit behind.
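If the old slave is still reachable, its last applied position can be read directly; otherwise you are left inspecting the binlog on the new slave. A sketch (the binlog file name is a placeholder):
-- on the old slave, if it is still running:
SELECT @@GLOBAL.gtid_slave_pos;
-- on the new slave, list its binlog files and walk through the most recent one:
SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mariadb-bin.000042';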
Since each galera slave node probably doesn't know about the executed GTIDs and can't skip previous GTID events, you may also have to play with SQL_SLAVE_SKIP_COUNTER to get to the correct position if the GTID you found is behind.
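As suggested, skipping events looks something like this; every skipped event is a potential data difference between master and slave, so use it with care:
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;   -- skip one replication event
START SLAVE;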
When you get the GTID (or a guess of it) you will then set up replication on the new slave the same way you set it up on the original slave. The following commands should do it:
SET GLOBAL gtid_slave_pos = "{Last Executed GTID}";
CHANGE MASTER TO master_host="{Master Address}", master_port={Master Port}, master_user="{Replication User}", master_password="{Replication Password}", master_use_gtid=slave_pos;
START SLAVE;
You should also disable replication on the old slave before restarting it so the missed events don't get replicated twice.
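One way to do that, assuming the old slave comes back up at all (a sketch):
STOP SLAVE;
RESET SLAVE ALL;   -- removes the replication configuration so it cannot reconnect to the master
Starting the old slave with skip_slave_start is another option, so that replication does not resume automatically on boot.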
Until the executed slave GTID is replicated through galera, which might never happen, failover like this will be a messy process.
