Are messages dropped in Raft?

I'm reading about Raft, but I got a bit confused when it comes to consensus after a network partition.
So, consider a cluster of 2 nodes: 1 leader, 1 follower.
Before partitioning, X messages were written and successfully replicated. Then imagine that a network problem causes a partition, so there are 2 partitions, A (ex-leader) and B (ex-follower), which are now both leaders (receiving writes):
before partition | messages  |x| after partition | messages
Leader           | 0 1 2 3 4 |x| Partition A     | 5 6 7 8 9
Follower         | 0 1 2 3 4 |x| Partition B     | 5' 6' 7' 8' 9'
After the partition heals, what happens?
a) Do we elect one new leader and consider only its log, dropping the new follower's conflicting messages?
e.g.:
0 1 2 3 4 5 6 7 8 9 (total of 10 messages, 5 dropped)
or even:
0 1 2 3 4 5' 6' 7' 8' 9' (total of 10 messages, 5 dropped)
(depending on which node got to be leader)
b) Do we elect a new leader and find a way to reach consensus on all the messages?
0 1 2 3 4 5 5' 6 6' 7 7' 8 8' 9 9' (total of 15 messages, 0 dropped)
If (b), is there any specific way of doing that, or does it depend on the client implementation (e.g. message timestamps)?

The leader's log is taken to be "the log" once the leader is elected and has successfully written its initial log entry for the term. However, in your case the starting premise is not correct: in a cluster of 2 nodes, a node needs 2 votes to become leader, not 1. So given a network partition, neither node will be leader.
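To make the vote arithmetic concrete, here is a minimal sketch (plain Python, my own illustration rather than anything from the Raft paper) of the quorum rule the answer relies on:

    # A candidate needs a strict majority of the FULL cluster,
    # not just of the nodes it can currently reach.
    def quorum(cluster_size: int) -> int:
        return cluster_size // 2 + 1

    for n in (2, 3, 5):
        print(f"cluster of {n}: {quorum(n)} votes needed to elect a leader")
    # cluster of 2: 2 votes needed -> a partitioned node can never win,
    # so neither side accepts new writes and no divergent 5..9 logs arise.

This is why the scenario in the question cannot occur: with 2 nodes, a partition makes the cluster unavailable rather than split-brained.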


How to make a select query for an infinite level of tree, using a recursive method

I want to make an SQL query that can select all the child trees that belong to the chosen parent. For example, the following picture explains:
if I choose parent "hot" I must get {tea, green tea, lemon tea, reg tea, coffee, espresso, cappuccino, latte};
if I choose "juice" I likewise get all the children belonging to it: {mango, orange, lemonade}.
I think it should be a recursive select method that can call itself until all levels of sub-children are reached.
(https://i.stack.imgur.com/h5FGc.png)
DRINK -- hot ---- tea ----- green tea, reg tea, lemon tea
      |       '-- coffee -- cappuccino, espresso, latte
      |
      '- cold --- shake --- cocktail, strawberry, banana
              |-- juice --- mango, orange, ...
              '-- water --- still, sparkling, flavoured, ...
the table can be:

id  name        ref
1   drink       0
2   cold        1
3   hot         1
4   tea         3
5   coffee      3
6   shake       2
7   juice       2
8   water       2
9   espresso    5
10  cappuccino  5
11  latte       5
12  mango       7
13  cocktail    6
14  still       8
15  sparkling   8
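One standard way to do this is a recursive common table expression (WITH RECURSIVE), supported by MySQL 8+, PostgreSQL, SQL Server, and SQLite. Below is a self-contained sketch that runs the CTE through Python's sqlite3 module; the table name drinks is my assumption, and the data is the table from the question (with spellings normalized):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE drinks (id INTEGER, name TEXT, ref INTEGER)")
    con.executemany(
        "INSERT INTO drinks VALUES (?, ?, ?)",
        [(1, "drink", 0), (2, "cold", 1), (3, "hot", 1), (4, "tea", 3),
         (5, "coffee", 3), (6, "shake", 2), (7, "juice", 2), (8, "water", 2),
         (9, "espresso", 5), (10, "cappuccino", 5), (11, "latte", 5),
         (12, "mango", 7), (13, "cocktail", 6), (14, "still", 8),
         (15, "sparkling", 8)],
    )

    subtree = """
        WITH RECURSIVE sub(id) AS (
            SELECT id FROM drinks WHERE name = ?  -- anchor: the chosen parent
            UNION ALL
            SELECT d.id FROM drinks d JOIN sub s ON d.ref = s.id
        )
        SELECT name FROM drinks WHERE id IN (SELECT id FROM sub) AND name <> ?
    """
    print([row[0] for row in con.execute(subtree, ("hot", "hot"))])
    # -> ['tea', 'coffee', 'espresso', 'cappuccino', 'latte']

The anchor row seeds the recursion with the chosen parent; each pass joins children (ref = parent id) onto what has been found so far, until no new rows appear.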

Is there an efficient algorithm to create this type of schedule? [closed]

I am creating a schedule for a sports league with several dozen teams. I already have all of the games in a set order and now I just need to assign one team to be the "home" team and one to be "away" for each game.
The problem has two constraints:
1. Each pair of teams must play an equal number of home and away games against each other. For example, if team A and team B play 4 games, then 2 must be hosted by A and 2 by B. Assume that each pair of teams plays an even number of games against each other.
2. No team should have more than three consecutive home games or three consecutive away games at any point in the schedule.
I have been trying to use brute force in R to solve this problem but I can't get any of my code blocks to solve the issue in a timely fashion. Does anyone have any advice on how to deal with either (or both) of the above constraints algorithmically?
You need to do more research on simple scheduling; there are a lot of references online for these things.
Here are the basics for your application. Let's assume a league of 6 teams; the process is the same for any number.
Match 1: Simply write down the team numbers in order, in pairs, in a ring. Flatten the ring into two lines. Matches are upper (home) and lower (away).
1 2 3
6 5 4
Matches 2-5: Team 1 stays in place; the others rotate around the ring.
1 6 2
5 4 3
1 5 6
4 3 2
1 4 5
3 2 6
1 3 4
2 6 5
That's one full cycle. To balance the home-away schedule, simply invert the fixtures every other match:
1 2 3 5 4 3 1 5 6 3 2 6 1 3 4
6 5 4 1 6 2 4 3 2 1 4 5 2 6 5
There's your first full round. Simply replicate this, again switching home-away fixtures in alternate rounds. Thus, the second round would be:
6 5 4 1 6 2 4 3 2 1 4 5 2 6 5
1 2 3 5 4 3 1 5 6 3 2 6 1 3 4
Repeat this pair of rounds as many times as needed to get the length of schedule you need.
If you have an odd quantity of teams, simply declare one of the numbers to be the "bye" in the schedule. I find it easiest to follow if I use the non-rotating team -- team 1 in this example.
Note that this home-switching process guarantees that no team has three consecutive matches either home or away: teams get at most two in a row, when rounding the end of the row. And even the two-in-a-row doesn't survive the end of the round: both of those teams break the streak in the first match of the next round.
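Here is the rotation plus alternating inversion just described, as a short sketch (in Python rather than R for brevity; the team numbering and output format are my own choices):

    # Circle method: team 1 stays fixed, the rest rotate one step per
    # match day; alternate days invert home/away to balance fixtures.
    def round_robin(n_teams):
        ring = list(range(1, n_teams + 1))
        half = n_teams // 2
        schedule = []
        for day in range(n_teams - 1):
            home, away = ring[:half], ring[half:][::-1]
            if day % 2:                      # invert every other match day
                home, away = away, home
            schedule.append(list(zip(home, away)))        # (home, away) pairs
            ring = [ring[0]] + [ring[-1]] + ring[1:-1]    # rotate all but team 1
        return schedule

    for day, matches in enumerate(round_robin(6), 1):
        print(f"match day {day}: {matches}")
    # match day 1: [(1, 6), (2, 5), (3, 4)]
    # match day 2: [(5, 1), (4, 6), (3, 2)]   <- inverted, as in the text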
Unfortunately, for an arbitrary existing schedule, you are stuck with a brute-force search with backtracking (see the sketch after the example below). You can employ some limits and heuristics, such as balancing partial home-away fixtures as the first option at each juncture. Still, the better approach is to make your original schedule correct by design.
There's also the problem that you cannot guarantee an arbitrary existing schedule can fulfill the given requirements at all. For instance, given the following 8-team fixtures in this order:
1 2 3 4
5 6 7 8
1 2 5 6
3 4 7 8
1 3 5 7
2 4 6 8
It is not possible to avoid having at least two teams playing three consecutive home or away matches.
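For the arbitrary-schedule case, here is a hedged sketch of the brute-force search with backtracking (Python for brevity; the quota bookkeeping and the signed streak encoding are my own illustrative choices, not a reference implementation):

    from collections import defaultdict

    def assign_home_away(games, max_streak=3):
        # games: list of (team, team) pairs in playing order.
        # Returns a list of (home, away) pairs, or None if infeasible.
        left = defaultdict(int)      # unassigned games per pair
        owed = defaultdict(int)      # of those, how many pair[0] must host
        for g in games:
            left[tuple(sorted(g))] += 1
        for p in left:
            owed[p] = left[p] // 2   # each side hosts half (counts are even)

        streak = defaultdict(int)    # +k: k straight home games, -k: away

        def bump(s, home):           # extend a streak or start a new one
            return (max(s, 0) + 1) if home else (min(s, 0) - 1)

        def solve(i):
            if i == len(games):
                return []
            a, b = games[i]
            p = tuple(sorted((a, b)))
            for host, guest in ((a, b), (b, a)):
                hosts_left = owed[p] if host == p[0] else left[p] - owed[p]
                if hosts_left == 0:
                    continue         # this team already hosted its share
                hs, gs = bump(streak[host], True), bump(streak[guest], False)
                if max(hs, -gs) > max_streak:
                    continue         # would exceed 3 in a row
                left[p] -= 1
                owed[p] -= host == p[0]
                saved = streak[host], streak[guest]
                streak[host], streak[guest] = hs, gs
                rest = solve(i + 1)
                if rest is not None:
                    return [(host, guest)] + rest
                streak[host], streak[guest] = saved      # backtrack
                owed[p] += host == p[0]
                left[p] += 1
            return None

        return solve(0)

    games = [(1, 2), (1, 2), (2, 3), (1, 3), (2, 3), (1, 3)]
    print(assign_home_away(games))
    # -> [(1, 2), (2, 1), (2, 3), (1, 3), (3, 2), (3, 1)]

Pruning as soon as a constraint is violated keeps this tolerable for moderate sizes, but the worst case is still exponential, which is why the correct-by-design rotation above is preferable.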

MPI_Scatter values with repetitions

For example I have 6 MPI nodes forming a 1D grid.
On the master process I have some values for the edges of the grid:
[1 2 3 4 5]
And I want to distribute these values to put each value to both nodes that are adjacent to the corresponding edge. That is, I want to get the following data distribution among the nodes:
1 | 1 2 | 2 3 | 3 4 | 4 5 | 5
What is the best way to perform this? It seems that this cannot be done with a single MPI_Scatter call.
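Indeed, a plain MPI_Scatter cannot overlap the pieces it sends. One workaround (a sketch, not the only option) is to duplicate each shared edge value into a padded send buffer on the root, so every rank receives exactly two slots; here in Python with mpi4py and numpy (both assumed available; run with 6 ranks, e.g. mpiexec -n 6):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()          # 6 in the example

    PAD = -1                        # sentinel for the two missing outer edges
    recv = np.empty(2, dtype='i')

    if rank == 0:
        edges = np.arange(1, size, dtype='i')   # [1 2 3 4 5]
        send = np.full(2 * size, PAD, dtype='i')
        send[1:-1:2] = edges        # right edge of each rank
        send[2::2] = edges          # same value again: left edge of next rank
        # send = [PAD 1 | 1 2 | 2 3 | 3 4 | 4 5 | 5 PAD]
    else:
        send = None

    comm.Scatter(send, recv, root=0)            # two slots per rank
    mine = recv[recv != PAD]        # rank 0 -> [1], rank 1 -> [1 2], ...
    print(rank, mine)

The same idea works in C with MPI_Scatter and sendcount = 2, or with MPI_Scatterv over a similarly duplicated buffer if you prefer exact counts of 1 at the two ends instead of padding.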

What is the difference between a bank conflict and channel conflict on AMD hardware?

I am learning OpenCL programming and running some programs on an AMD GPU. I referred to the AMD OpenCL Programming Guide to read about global memory optimization for the GCN architecture, but I am not able to understand the difference between a bank conflict and a channel conflict.
Can someone explain the difference between them?
Thanks in advance.
If two memory access requests are directed to the same memory controller, the hardware serializes the access. This is called a channel conflict. That is, each integrated memory controller circuit can serve only a single request at a time; if two work-items' addresses happen to map to the same channel, they are served serially.
Similarly, if two memory access requests go to the same memory bank, the hardware serializes the access. This is called a bank conflict. If there are multiple memory chips, you should avoid using a stride that is a multiple of the hardware's interleave widths.
Example with 4 channels and 2 banks (not a real-world example, since the number of banks is normally greater than or equal to the number of channels):

address  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
channel  1 2 3 4 1 2 3 4 1  2  3  4  1  2  3  4  1
bank     1 2 1 2 1 2 1 2 1  2  1  2  1  2  1  2  1
so you should not read like this:
address  1 3 5 7 9
channel  1 3 1 3 1    // 50% channel conflict
bank     1 1 1 1 1    // 100% bank conflict, serialized at the bank level
nor this:
address  1 5 9 13
channel  1 1 1 1      // 100% channel conflict, serialized
bank     1 1 1 1      // 100% bank conflict, serialized
but this would be OK:

address  1 6 11 16
channel  1 2 3  4     // no conflict, 100% channel usage
bank     1 2 1  2     // no conflict, 100% bank usage

because the stride is a multiple of neither the channel width nor the bank width.
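To make the mapping arithmetic above concrete, here is a toy model in Python (my own simplification: real hardware interleaves cache-line-sized blocks of bytes rather than single elements, and the 4-channel/2-bank sizes are the example's, not GCN's):

    # Map 1-based addresses to channels/banks by simple modulo interleaving
    # and report how well a given stride spreads its accesses.
    def spread(stride, n_access=4, channels=4, banks=2):
        addrs = [1 + i * stride for i in range(n_access)]
        chans = {(a - 1) % channels for a in addrs}
        bnks = {(a - 1) % banks for a in addrs}
        return f"stride {stride}: {len(chans)}/{channels} channels, {len(bnks)}/{banks} banks"

    for stride in (2, 4, 5):
        print(spread(stride))
    # stride 2: 2/4 channels, 1/2 banks   <- the first bad case above
    # stride 4: 1/4 channels, 1/2 banks   <- fully serialized
    # stride 5: 4/4 channels, 2/2 banks   <- the conflict-free case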
Edit: if your algorithm is optimized more for local storage, then you should pay attention to local data store channel conflicts. On top of this, some cards can use constant memory as an independent channel source to speed up read rates.
Edit: you can use multiple wavefronts to hide conflict-based latencies, or you can use instruction-level parallelism.
Edit: local data store channels are much faster and more numerous than global channels, so optimizing for the LDS (local data share) is very important. Gathering uniformly on global channels and then scattering on local channels shouldn't be as problematic as scattering on global channels and gathering uniformly on local channels.
http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-app-sdk/opencl-optimization-guide/#50401334_pgfId-472173
For an AMD APU with a decent mainboard, you should be able to select n-way channel interleaving or n-way bank interleaving as desired, if your software cannot be altered.

What is the correct name of this error correction method (it is similar to Hamming Code)

What is the correct name of this error correction method?
It is quite similar to Hamming code, but much simpler. I also cannot find it in the literature any more. The only internet sources I can still find that describe the method are this page:
http://www.mathcs.emory.edu/~cheung/Courses/455/Syllabus/2-physical/errors-Hamming.html
and the German-language Wikipedia:
http://de.wikipedia.org/w/index.php?title=Fehlerkorrekturverfahren
In the Wikipedia article, the method is called the Hamming-ECC method, but I'm not 100% sure this is correct.
Here is an example which shows how the method works.
Payload: 10011010
Step 1: Determine the parity bit positions. Positions that are powers of 2 (1, 2, 4, 8, 16, etc.) hold parity bits:

Position:                1 2 3 4 5 6 7 8 9 10 11 12
Data to be transmitted:  ? ? 1 ? 0 0 1 ? 1 0  1  0
Step 2: Calculate the parity bit values. Each bit position in the transmission is assigned a position number; in this example that is a 4-digit binary number, because we have 4 parity bits. XOR the position numbers (in 4-digit binary) of all positions where the transmitted data bit is 1:

    0011   Position 3
    0111   Position 7
    1001   Position 9
XOR 1011   Position 11
--------------------
    0110   = parity bit values
Step 3: Insert the parity bit values into the transmission:

Position:                1 2 3 4 5 6 7 8 9 10 11 12
Data to be transmitted:  0 1 1 1 0 0 1 0 1 0  1  0
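For concreteness, here is the encoding procedure (steps 1-3) as a small Python sketch. It follows the example's convention that the leftmost bit of the computed parity word lands at position 1, the next at position 2, then 4, then 8; the function name and 8-bit payload size are my own choices:

    def encode(payload):
        n_parity = 4                        # 8 payload bits -> 12 slots
        length = len(payload) + n_parity
        parity_pos = [2 ** k for k in range(n_parity)]    # 1, 2, 4, 8

        # Step 1: lay out the payload, skipping the parity positions.
        slots = {}
        it = iter(payload)
        for pos in range(1, length + 1):
            if pos not in parity_pos:
                slots[pos] = int(next(it))

        # Step 2: XOR the position numbers of all 1-valued data bits.
        parity = 0
        for pos, bit in slots.items():
            if bit:
                parity ^= pos

        # Step 3: insert the parity word, MSB at position 1 (per the example).
        for bit, pos in zip(format(parity, f"0{n_parity}b"), parity_pos):
            slots[pos] = int(bit)
        return "".join(str(slots[p]) for p in range(1, length + 1))

    print(encode("10011010"))   # -> 011100101010, matching step 3 above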
It is quite simple to verify whether a received message was transmitted correctly, and single-bit errors can be corrected. Here is an example. The receiver XORs the position numbers of all received 1 data bits, and then XORs that with the received parity bits. If the result is 0, the transmission is error-free; otherwise the result is the position of the bit with the wrong value.
Received message: 0001101100101101

Position:      1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Received data: 0 0 0 1 1 0 1 1 0 0  1  0  1  1  0  1
Parity bits:   X X   X       X                     X

    00101   Position 5
    00111   Position 7
    01011   Position 11
    01101   Position 13
XOR 01110   Position 14
--------------------
    01010   parity bits calculated
XOR 00111   parity bits received
--------------------
    01101   => bit 13 is defective!
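And the matching receiver-side check, again just a sketch of the procedure as described (same reversed parity-placement convention as the encoder above):

    def check(received):
        # Returns 0 if error-free, else the 1-based position of the bad bit.
        n = len(received)
        parity_pos = [p for p in (2 ** k for k in range(n)) if p <= n]

        # XOR the position numbers of all 1-valued *data* bits.
        calc = 0
        for pos in range(1, n + 1):
            if pos not in parity_pos and received[pos - 1] == "1":
                calc ^= pos

        # Read the received parity word back out, MSB from position 1.
        recv = int("".join(received[p - 1] for p in parity_pos), 2)
        return calc ^ recv

    print(check("0001101100101101"))   # -> 13: bit 13 is defective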
I hope somebody here knows the correct name of the method.
Thanks for any help.
This looks like a complicated implementation of the Hamming(15,11) encoding and decoding algorithm.
Interleaving the parity bits with the information bits does not change the behaviour (or performance) of the code. Your description only uses 8 information bits, whereas Hamming(15,11) corrects all single-bit errors even with 11 information bits being transmitted.
Your description also does not explain how the transmitted 12-bit message gets extended to a 16-bit message on the receive side.
