I am having problems building a database solution for an experiment that needs both high availability and performance (sharding).
Right now I have a Spider node and two Galera clusters (3 nodes in each cluster), as shown in the figure below, and this configuration works well in general cases:
However, as far as I know, when the Spider engine performs sharding, it must be configured with a primary IP address for each shard, i.e. one specific node in each of the two Galera clusters, to which it distributes SQL statements.
So my first question here is:
Q1): When the machine at .12 shuts down due to a failure, how can I make .13 or .14 (one of them) automatically replace .12?
The servers that the Spider engine knows about
Q2): Are there any open-source tools (or technologies) that can help me deal with this situation? If so, please explain how they work. (Maybe MaxScale? But I am not familiar with what it is or what it can do.)
Q3): The motivation for this experiment is as follows. An automated factory has many machines, and each machine generates data that must be recorded during the production process (perhaps hundreds or thousands of records per second) in order to observe the operation of the machines and keep the quality of each batch of products as high as possible.
So my question is: is this architecture (Figure 1) reasonable? Or please provide your suggestions.
You could use MaxScale in front of the Galera cluster to make the individual nodes appear like a combined cluster. This way Spider will be able to seamlessly access the shard even if one of the nodes fails. You can take a look at the MaxScale tutorial for instructions on how to configure it for a Galera cluster.
Something like this should work:
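A minimal sketch of such a configuration for one Galera shard; the server names, addresses and credentials below are placeholders for the .12/.13/.14 nodes and must be adapted to your environment (protocol names also vary slightly between MaxScale versions):

    # /etc/maxscale.cnf (sketch)
    [node-12]
    type=server
    address=10.0.0.12        # placeholder for the real address of .12
    port=3306
    protocol=MariaDBBackend

    [node-13]
    type=server
    address=10.0.0.13
    port=3306
    protocol=MariaDBBackend

    [node-14]
    type=server
    address=10.0.0.14
    port=3306
    protocol=MariaDBBackend

    # The Galera monitor tracks which nodes are healthy cluster members.
    [Galera-Monitor]
    type=monitor
    module=galeramon
    servers=node-12,node-13,node-14
    user=maxscale
    password=maxscale_pw

    # The service routes incoming queries to the healthy nodes.
    [Galera-Service]
    type=service
    router=readwritesplit
    servers=node-12,node-13,node-14
    user=maxscale
    password=maxscale_pw

    # Spider connects to this listener as if it were a single database node.
    [Galera-Listener]
    type=listener
    service=Galera-Service
    protocol=MariaDBClient
    port=4006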
This of course has the same limitation that a single database node has: if the MaxScale server goes down, you'll have to switch to a different MaxScale for that cluster. The benefit of using MaxScale is that it is in some sense stateless which means it can be started and stopped almost instantly. A network load balancer (e.g. ELB) can already provide some form of protection from this problem.
I am using AWS ElastiCache for Redis as the caching solution for my spring-boot application. I am using spring-boot-starter-data-redis and jedis client to connect with my cache.
Imagine that my cache is running with cluster mode enabled and has 3 shards with 2 nodes in each. I agree that the best way of connecting is to use the configuration endpoint. Alternatively, I can list all the endpoints of all the nodes and that gets the job done.
However, even if I use a single node's endpoint from one of the shards, my caching solution still works. That doesn't look right to me. I feel that even if it works, it might cause problems in the cluster in the long run, since there are altogether 6 nodes partitioned into 3 shards but I am only using one node's endpoint. I have the following questions.
Does using one node's endpoint create an imbalance in the cluster?
or
Is that handled automatically by AWS ElastiCache for Redis?
If I use only one node's endpoint, does that mean the other nodes will never be used?
Thank you!
To answer your questions:
Does using one node's endpoint create an imbalance in the cluster?
No.
Is that handled automatically by AWS ElastiCache for Redis?
Somewhat.
If I use only one node's endpoint, does that mean the other nodes will never be used?
No. All nodes are being used.
This is how cluster mode enabled works. In your case, you have 3 shards, meaning all your hash slots (where the key-value data is stored) are divided among 3 sub-clusters, i.e. shards.
This was explained in this answer as well - https://stackoverflow.com/a/72058580/6024431
So, essentially, your nodes are smart enough to redirect your requests to the node that owns the hash slot where your data needs to be stored. So, no imbalance. Redis handles the redirection for you.
Now, while using node endpoints, you're going to face other problems.
ElastiCache runs in the cloud (which is essentially AWS hardware), and all hardware faces issues. You have 3 primaries (1p, 2p, 3p) and 3 replicas (1r, 2r, 3r).
So, if a primary goes down due to a hardware issue (let's say 1p), its replica (1r) will be promoted to become the new primary for that shard.
Now the problem is that your application is connected directly to 1p, which has now been demoted to a replica, so all WRITE operations will fail.
And you will have to change the application code manually whenever this happens.
Alternatively, if you were using the configuration endpoint (or other cluster-level endpoints) instead of node endpoints, this issue would at most be a brief blip for your application, perhaps for 1-2 seconds.
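For example, with spring-boot-starter-data-redis and Jedis, pointing the client at the cluster configuration endpoint could look roughly like the sketch below (the endpoint host name is a placeholder):

    import java.util.Collections;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.RedisClusterConfiguration;
    import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

    @Configuration
    public class CacheConfig {

        @Bean
        public JedisConnectionFactory redisConnectionFactory() {
            // Connect through the cluster configuration endpoint (placeholder host),
            // not a single node endpoint, so the client discovers the full topology
            // and follows replica promotions automatically.
            RedisClusterConfiguration cluster = new RedisClusterConfiguration(
                    Collections.singletonList(
                            "clustercfg.my-cache.abc123.use1.cache.amazonaws.com:6379"));
            return new JedisConnectionFactory(cluster);
        }
    }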
Cheers!
I have a MariaDB Galera cluster with 2 nodes and it is up and running.
Before moving to production, I want to make sure that if a node crashes abruptly, it comes back up on its own.
I tried using systemd "restart", but after killing the mysql process the mariadb service does not come back up. So, is there any tool or method that I can use to automate bringing the nodes back up after crashes?
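For context, the kind of systemd override this refers to is roughly the following (just a sketch; the exact unit name and values depend on the distribution):

    # /etc/systemd/system/mariadb.service.d/restart.conf (sketch)
    [Service]
    Restart=on-failure
    RestartSec=5s

    # followed by: systemctl daemon-reload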
Galera clusters need to have quorum (3 nodes).
In order to avoid a split-brain condition, the minimum recommended number of nodes in a cluster is 3. Blocking state transfer is yet another reason to require a minimum of 3 nodes in order to enjoy service availability in case one of the members fails and needs to be restarted. While two of the members will be engaged in state transfer, the remaining member(s) will be able to keep on serving client requests.
You can read more here.
I want to preface this by saying that I've never taken a networking class, but I'm learning on the job. I have a pretty basic grasp of things like TCP/IP networking, and if you think this will hinder my attempt at this, let me know.
The task I have at hand is this: I have an OpenStack network with a bunch of nodes that can communicate with each other, all running CentOS virtual machines (just for simplicity's sake) with applications running on top of them. The task is basically to find a way to monitor the ping latency of every node and send some kind of message (probably over HTTP) that reports what happened whenever there is a problem. The logic of checking for the actual latency problems isn't what I'm struggling with; it's finding the best structure for completing this task.
I'm thinking of using Nagios and setting up a distributed monitoring system. Basically, my plan is to install Nagios on each node after writing my plugin (unless one is already offered or exists); each node would simply ping everything else in the network once it is set up, and the other nodes would ping it once they detect that it has joined the network. I'm not sure exactly how scalable this is, because if the number of nodes increases a lot, would having every node ping every other node actually be a good thing? Could it end up putting a lot of stress on the network?
Is this a bad idea? I know a more efficient solution would be one where every node is still checked, but not necessarily by every other node. Visualizing it as a graph, it would be an undirected graph with a single path connecting each point rather than edges between every possible pair of points. But I don't know if this is the level at which I should be thinking about it.
In short, what I'm asking is: how would one go about setting up a ping monitoring system between a bunch of OpenStack nodes?
Let me know if this question makes sense. Thanks.
Still not entirely sure what you're trying to accomplish with this setup, but the Nagios setup you're describing sounds messy and likely won't cover what you need. I'd look at building packetbeat into the provisioning of each of your hosts, and then shipping that data off to Elasticsearch. That way you can watch your actual application-level traffic and response times. https://www.elastic.co/products/beats/packetbeat
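A rough sketch of the relevant parts of packetbeat.yml (the exact keys vary between Beats versions, and the Elasticsearch host is a placeholder):

    # packetbeat.yml (sketch)
    packetbeat.interfaces.device: any

    packetbeat.protocols.icmp:
      enabled: true                  # capture ping round-trip times

    packetbeat.protocols.http:
      ports: [80, 8080]              # application-level HTTP latency

    output.elasticsearch:
      hosts: ["http://elasticsearch.example.com:9200"]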
I have a business need related to a MariaDB instance that should work in a master-slave configuration with failover.
Looking at the documentation, I have seen that it is possible to configure a multi-master cluster (Galera) or a simple master-slave replica.
Any suggestions for configuring master-slave + failover?
Many thanks in advance
Roberto
MySQL/MariaDB master-slave replication is great for handling read-heavy workloads. It's also used as a redundancy strategy to improve database availability, and as a backup strategy (i.e. take the snapshot/backup on the slave to avoid interrupting the master). If you don't need a multi-master solution with all the headaches that brings—even with MySQL Cluster or MariaDB Galera Cluster—it's a great option.
It takes some effort to configure. There are several guides out there with conflicting information (e.g. MySQL vs. MariaDB, positional vs. GTID) and several decision points that can affect your implementation (e.g. row vs. statement binlog formats, storage engine selection), and you might have to stitch various pieces together to form your final solution. I've had good luck with MariaDB 10.1 (GTID, row binlog format) and mixed MyISAM and InnoDB storage engines. I create one slave user on the master per slave, and I don't replicate the mysql database. YMMV. This guide is a good starting place, but it doesn't really cover GTID.
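As a rough sketch of the GTID part (host names and credentials are placeholders, and it assumes the slave was seeded from a consistent backup of the master):

    -- On the master: one replication user per slave.
    CREATE USER 'repl_slave1'@'%' IDENTIFIED BY 'secret';
    GRANT REPLICATION SLAVE ON *.* TO 'repl_slave1'@'%';

    -- On the slave: point it at the master and use GTID-based positioning
    -- instead of binlog file/offset coordinates.
    CHANGE MASTER TO
      MASTER_HOST = 'master.example.com',
      MASTER_USER = 'repl_slave1',
      MASTER_PASSWORD = 'secret',
      MASTER_USE_GTID = slave_pos;
    START SLAVE;
    SHOW SLAVE STATUS\G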
Failover is a whole separate ball of wax. You will need some kind of a reverse proxy (such as MaxScale or HAproxy) or floating IP address in front of your master that can adjust to master changes. (There might be a way to do this client-side, but I wouldn't recommend it.) Something has to monitor the health of the cluster, and when it comes time to promote a slave to the new master, there is a whole sequence of steps that have to be performed. MySQL provides a utility called mysqlfailover to facilitate this process, but as far as I know, it is not compatible with MariaDB. Instead, you might take a look at replication-manager, which seems to be MariaDB's Go-based answer to mysqlfailover. It appears to be a very sophisticated tool.
Master-Slave helps with failover, but does not provide it.
MariaDB Cluster (Galera) does provide failover for most cases, assuming you have 3 nodes.
What is the right approach to use to configure OpenSplice DDS to support 100,000 or more nodes?
Can I use a hierarchical naming scheme for partition names, so "headquarters.city.location_guid_xxx" would prevent packets from leaving a location, and "company.city*" would allow samples to align across a city, and so on? Or would all the nodes know about all these partitions just in case they wanted to publish to them?
The durability services will choose a master when they come up. If one durability service is running on a Raspberry Pi in a remote location over a 3G link, what is to prevent it from trying to become the master for "headquarters" and crashing?
I am experimenting with durability settings such that a remote node uses the location_guid_xxx scope, but for the "headquarters" cloud server I use a Headquarters scope.
On the remote client I might do this:
<Merge scope="Headquarters" type="Ignore"/>
<Merge scope="location_guid_xxx" type="Merge"/>
so a location won't be master for the universe, but can a durability service within a location still be master for that location?
If I have 100,000 locations does this mean I have to have all of them listed in the "Merge scope" in the ospl.xml file located at headquarters? I would think this alone might limit the size of the network I can handle.
I am assuming that this product will handle this sort of Internet of Things scenario. Has anyone else tried it?
Considering the scale of your system, I think you should seriously consider the use of Vortex-Cloud (see these slides: http://slidesha.re/1qMVPrq). Vortex Cloud will allow you to scale your system better as well as deal with NATs/firewalls. Besides that, you'll be able to use TCP/IP to communicate from your Raspberry Pi to the cloud instance, thus avoiding any problems related to NATs/firewalls.
Before getting to your durability question, there is something else I'd like to point out. If you try to build a flat system with 100K nodes, you'll generate quite a bit of discovery information. Besides generating some traffic, this will take up memory in your end applications. If you use Vortex-Cloud instead, we play tricks to limit the discovery information. To give you an example, if you have a data writer matching 100K data readers, when using Vortex-Cloud the data writer would only match a single endpoint, thus reducing the discovery information by a factor of 100K!
Finally, concerning your durability question, you could configure some durability services as alignee-only. In that case they will never become master.
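A rough sketch of what that could look like in ospl.xml (the namespace name is a placeholder, and the exact element and attribute names should be checked against the OpenSplice deployment guide for your version):

    <DurabilityService name="durability">
      <NameSpaces>
        <NameSpace name="locationData">
          <Partition>location_guid_xxx</Partition>
        </NameSpace>
        <!-- aligner="false" makes this durability service an alignee only,
             so it will never try to become master for this namespace. -->
        <Policy nameSpace="locationData" durability="Durable"
                alignee="Initial" aligner="false"/>
      </NameSpaces>
    </DurabilityService>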
HTH.
A+