DR setup for MariaDB Galera Clusters

I have two MariaDB Galera Clusters with 3 nodes each.
Cluster 1: MDB-01, MDB-02, MDB-03
Cluster 2: MDBDR-01, MDBDR-02, MDBDR-03
These two clusters are in two different data centers which are in two geographical regions.
Cluster 1 is the PRODUCTION cluster and Cluster 2 is the DR cluster.
Asynchronous replication using GTID has been set up from MDB-01 to MDBDR-01
as per given configuration in the link :
http://www.severalnines.com/blog/deploy-asynchronous-replication-slave-mariadb-galera-cluster-gtid-clustercontrol
(The link describes asynchronous replication from a MariaDB Galera Cluster to a standalone MariaDB instance;
however, I have set up the same configuration for asynchronous replication between two MariaDB Galera Clusters.)
I am able to switch the current slave from MDBDR-01 => MDB-01 to MDBDR-01 => MDB-02 with the command below:
CHANGE MASTER TO master_host='MDB-02';
However, I am struggling with how to point MDBDR-02 => MDB-01 in case MDBDR-01 goes down.
Could you please provide input on how to point MDBDR-02 => MDB-01 or MDBDR-03 => MDB-01?

One thing you need to understand, which that article briefly mentions, is that MariaDB's GTID implementation can cause problems in this situation. Since each node maintains its own list of GTIDs and Galera transactions do not have their own id, it is possible that the same GTID does not point to the same place on each server (see this article).
Due to that problem, I wouldn't attempt what you're doing without MariaDB 10.1. MariaDB 10.1.8 was just released and is the first GA release of the 10.1 line. 10.1 changes the GTID implementation so Galera transactions use their own server id (set via a config variable). You can then filter replication on the slaves to only replicate the Galera id.
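For example, a minimal sketch (MariaDB 10.1+), assuming the master cluster writes its Galera transactions under GTID domain id 1 (the value is an assumption; match it to your configuration):
STOP SLAVE;
-- Replicate only events from the master cluster's Galera GTID domain:
CHANGE MASTER TO DO_DOMAIN_IDS=(1);
START SLAVE;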
To switch to a different slave server, you will need the last GTID executed on the old slave. The gtid_slave_pos is stored in mysql.gtid_slave_pos, but mysql.* tables are not replicated.

I'm not completely sure, and I don't have a way of testing, whether the original GTID of a transaction is passed to the other slave Galera nodes (i.e. if the master cluster's Galera server id is 1 and the slave cluster's Galera server id is 2, and MDBDR-01 gets a slave event with GTID 1-1-123, will MDBDR-02 log it as 1-1-123 or 1-2-456?). I'm guessing that it doesn't, since the new GTID implementation should change the server id, but you may be able to verify this.

Since you probably can't get the last executed master GTID from a different slave Galera node, you will probably need to get the GTID from the old slave, which may not be possible unless you shut the old slave down gracefully. Failing that, you may need to find the GTID of the last executed transaction in the binlog on the new slave and try to match it to a transaction in the master's binlog. Also, if you're not using sync_binlog = 1, the binlog is not reliable and might be a bit behind.
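As a hedged sketch of that last step, MariaDB can translate a binlog file and offset into a GTID position; the file name and offset below are hypothetical:
-- On the new slave, find the newest binlog and inspect its last events:
SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mysql-bin.000042';
-- Translate a binlog file/offset into the corresponding GTID position:
SELECT BINLOG_GTID_POS('mysql-bin.000042', 4);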
Since each Galera slave node probably doesn't know about the executed GTIDs and can't skip previous GTID events, you may also have to play with SQL_SLAVE_SKIP_COUNTER to get to the correct position if the GTID you found is behind.
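A minimal sketch of that, assuming the slave threads are stopped first:
STOP SLAVE;
-- Skip one replication event; adjust the count to reach the correct position.
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;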
When you get the GTID (or a guess at it), you will then set up replication on the new slave the same way you set it up on the original slave. The following commands should do it:
SET GLOBAL gtid_slave_pos = "{Last Executed GTID}";
CHANGE MASTER TO master_host="{Master Address}", master_port={Master Port}, master_user="{Replication User}", master_password="{Replication Password}", master_use_gtid=slave_pos;
START SLAVE;
You should also disable replication on the old slave before restarting it so the missed events don't get replicated twice.
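One way to do that, as a sketch, is to remove the replication configuration entirely before the old slave rejoins the cluster:
-- On the old slave, before it resumes serving traffic:
STOP SLAVE;
RESET SLAVE ALL;  -- discards the master connection and replication state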
Until the executed slave GTID is replicated through Galera, which might never happen, failover like this will be a messy process.

Related

Can I use master-slave replication on a cluster?

I'm using MariaDB 5.5.60 on both Zabbix servers, which have a clustering solution between them for the zabbix-server service.
Can I use a master-slave solution with a cluster?
If I have node1 and node2, and they both have MariaDB on them,
node1 is the master and node2 is the slave.
If node1 goes down, can the slave keep the new information written to the database, or do I need to do some sync to make the slave the master and vice versa?
Is there such a master-slave solution, or is there a better option?
"Master-Slave" involves continually updating the Slave from the Master. If the Master crashes, there is a small chance of something not having made it to the Slave, but otherwise, the Slave is 'always' identical to the Master.
"Failover" mostly involves redirecting traffic to the Slave and making it writable.
Then there is the hassle of setting up a new Slave to the new Master, etc.
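A minimal promotion sketch, assuming the Slave was kept read-only while replicating:
-- On the Slave being promoted to Master:
STOP SLAVE;
RESET SLAVE ALL;             -- stop treating the old Master as a master
SET GLOBAL read_only = OFF;  -- start accepting writes (assumes it was ON)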
But... You have added confusion to the question by using the word "cluster". That probably refers to another replication technology that is more robust. Does it also say "Galera" or "Group Replication" or "InnoDB Cluster"? Probably not, since you are on the rather old 5.5.
Please study what Zabbix provides (I don't know what it does) and study "Replication" in the MySQL/MariaDB documentation.

MariaDB Galera cluster does not come up after killing the mysql process

I have a MariaDB Galera cluster with 2 nodes and it is up and running.
Before moving to production, I want to make sure that if a node crashes abruptly, it comes back up on its own.
I tried using systemd "restart", but after killing the mysql process the mariadb service does not come up. Is there any tool or method that I can use to automate bringing nodes back up after crashes?
Galera clusters need to have quorum (3 nodes).
In order to avoid a split-brain condition, the minimum recommended number of nodes in a cluster is 3. Blocking state transfer is yet another reason to require a minimum of 3 nodes in order to enjoy service availability in case one of the members fails and needs to be restarted. While two of the members will be engaged in state transfer, the remaining member(s) will be able to keep on serving client requests.
You can read more here.
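For reference, each node reports its own view of quorum via the wsrep status variables:
SHOW STATUS LIKE 'wsrep_cluster_size';    -- number of nodes this node can see
SHOW STATUS LIKE 'wsrep_cluster_status';  -- 'Primary' means the node has quorum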

How does Galera Cluster guarantee consistency?

I'm searching for a highly available SQL solution! One of the articles that I read was about "virtually synchronous" replication in Galera Cluster: https://www.percona.com/blog/2012/11/20/understanding-multi-node-writing-conflict-metrics-in-percona-xtradb-cluster-and-galera/
He says
When the writeset is actually applied on a given node, any locking
conflicts it detects with open (not-yet-committed) transactions on
that node cause that open transaction to get rolled back.
and
Writesets being applied by replication threads always win
What will happen if the WriteSet conflicts with a committed transaction?
He also says:
Writesets are then “certified” on every node (in order).
How does Galera Cluster order WriteSets across the cluster? Is there a hidden master node that orders WriteSets, something like ZooKeeper? Or what?
This is for the second question (about how Galera orders the writesets).
Galera implements Extended Virtual Synchrony (EVS) based on the Totem protocol. The Totem protocol implements a form of token passing, where only the node with the token is allowed to send out new requests (as I understand it). So the writes are ordered since only one node at a time has the token.
For the academic background, you can look at these:
The Totem Single-Ring Ordering and Membership Protocol
The database state machine and group communication issues
(This Answer does not directly tackle your Question, but it may give you confidence that Galera is 'good'.)
In Galera (PXC, etc), there are two general times when a transaction can fail.
On the node where the transaction is being run, the actions are compared to what is currently running on the same node. If there is a conflict, either one of the transactions is stalled (think innodb_lock_wait_timeout) or is deadlocked (and rolled back).
At COMMIT time, info is sent to all the other nodes; they check your transaction against anything on the node or pending (in gcache). If there is a conflict, a message is sent back saying that there would be trouble. So, the originating node has the COMMIT fail. For this reason, you must check for errors even on the COMMIT statement.
As with single-node systems, a deadlock is usually resolved by replaying the entire transaction.
In the case of autocommit, there is a small, configurable, number of retries, after which the statement will fail. So, again, check for errors. However, since a retry has already been tried, you may want to abort the program.
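To illustrate the point about COMMIT, a hedged sketch (the table is hypothetical); the key detail is that the error can surface at COMMIT rather than at the statement that caused the conflict:
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- On Galera, certification happens at commit; COMMIT itself can fail with a
-- deadlock error (1213), in which case the whole transaction must be replayed.
COMMIT;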
Currently (in my opinion) Galera, with at least 3 nodes in at least 3 different physical locations, is the best available HA solution for MySQL. It can effectively survive any single-point-of-failure. (Group Replication / InnoDB Cluster, from Oracle, is coming soon, and is very promising.)
One thing to note is that the "critical read" problem has a solution in Galera, but you have to take action. See wsrep_sync_wait. (As of this writing, InnoDB Cluster has no solution.)
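A sketch of using it around a critical read (the query is a placeholder):
SET SESSION wsrep_sync_wait = 1;  -- wait for the node to catch up before reads
SELECT balance FROM accounts WHERE id = 1;  -- now sees prior committed cluster writes
SET SESSION wsrep_sync_wait = 0;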
See http://mysql.rjweb.org/doc.php/galera for tips (some of which are included above) on coding differences when moving to PXC/Galera.

MariaDB Galera cluster: are replicate-do-db filters applied before or after data is sent?

I would like to synchronize only some databases on a cluster, with replicate-do-db.
→ If I use the Galera cluster, is all data sent over the network, or are nodes smart enough to fetch only their specific databases?
On "classic" master/slave MariaDB replication, filters are made by the slave, causing network charge for nothing if you don't replicate that database. You have to configure a blackhole proxy to filter binary logs to avoid this (setup example), but the administration after is not really easy. So it would be perfect with a cluster if I can perform the same thing :)
binlog_* filters are applied on the sending (Master) node.
replicate_* filters are applied on the receiving (Slave) node.
Is this filtered server part of the cluster? If so, you are destroying much of the beauty of Galera.
On the other hand, if this is a Slave hanging off one of the Galera nodes and the Slave does not participate in the "cluster", this is a reasonable architecture.
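As a sketch, on such an async Slave (assuming a MariaDB version where replicate_do_db is a dynamic variable; 'app_db' is a hypothetical database name; otherwise set it in my.cnf and restart):
STOP SLAVE;
SET GLOBAL replicate_do_db = 'app_db';  -- filter applied on the Slave side
START SLAVE;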

Connect two apps to a MariaDB Multi-Master database

Suppose that we have two application servers (app1 and app2), and we also set up multi-master MariaDB clustering with two nodes (node1 and node2) without any HAProxy. Can we connect app1 to node1 and app2 to node2, with both app1 and app2 writing to node1 and node2?
Does this cause any conflicts?
Galera solves most of the problems that occur with Master-Master:
If one of Master-Master dies, now what? Galera recovers from any of its 3 nodes failing.
If you INSERT the same UNIQUE key value in more than one Master, M-M hangs; Galera complains to the last client to COMMIT (see the sketch after this list).
If a node dies and recovers, the data is automatically repaired.
You can add a node without manually doing the dump, etc.
etc.
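The UNIQUE-key case from the list above, sketched with a hypothetical table t(uniq_col UNIQUE):
-- On node1:
INSERT INTO t (uniq_col) VALUES (9);
-- Meanwhile on node2, before node1 commits:
INSERT INTO t (uniq_col) VALUES (9);
-- node1: COMMIT succeeds (its writeset certifies first).
-- node2: COMMIT fails with a deadlock error; the application must handle it.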
However, there are a few things that need to be done differently when using Galera: Tips
