Can this algorithm achieve eventual consistency?

I have an assumption but I don't know how to prove or falsify it. Please comment if you know:
There are M interconnected nodes in a peer-to-peer network. Initially, each node numbers itself with an arbitrary integer from 1 to N, where N <= M.
Each node counts the occurrences of each number among its neighbors (including itself) and changes its own number to the most frequent one.
If all the numbers occur equally often, a number other than its own is selected at random and the node changes to that number.
Each node notifies its neighbors when its initialization is complete or when its number is updated.
Q:
Can this rule achieve final agreement on the numbering of all nodes?
If so, what is the expected time to reach agreement?
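Not a proof, but the rule is easy to simulate. Below is a minimal sketch under a synchronous-rounds simplification (the question's protocol is asynchronous and notification-driven, and the graph, label count, and exact tie-breaking here are assumptions):

```python
import random
from collections import Counter

def step(graph, labels):
    """One synchronous round: every node adopts the most frequent
    number among itself and its neighbours; on a tie it picks a
    non-self number at random, per the rule above."""
    new = {}
    for v, nbrs in graph.items():
        counts = Counter(labels[u] for u in [v] + nbrs).most_common()
        best, best_n = counts[0]
        tied = [lab for lab, n in counts if n == best_n]
        if len(tied) > 1:
            non_self = [lab for lab in tied if lab != labels[v]]
            best = random.choice(non_self or tied)
        new[v] = best
    return new

def rounds_to_consensus(graph, n_labels, max_rounds=1000):
    """Return the number of rounds until all nodes agree, or None."""
    labels = {v: random.randint(1, n_labels) for v in graph}
    for t in range(max_rounds):
        if len(set(labels.values())) == 1:
            return t
        labels = step(graph, labels)
    return None
```

On small random graphs this typically converges quickly, but since the model differs from the asynchronous protocol in the question, runs like this are only evidence, not proof.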

Related

Finding optimal order of all nodes to be visited in a graph

The following problem comes from geography, but I don't know of any GIS method to solve it. I think its solution can be found with graph analysis, but I need some guidance to think in the right direction.
There is a geographical area, say a state. It is subdivided into several quadrants, which are subdivided further, and once again. So it's a tree structure with the state as root and three levels of child nodes, each parent having four children. But from the perspective of the underlying process it's more like a complete graph, since in theory any node is directly reachable from every other node.
The subdivisions reflect map sheet boundaries at different map scales. Each map sheet has to be reviewed by a topographer in a time span dependent on the complexity of the map contents.
While a map is being reviewed, the underlying digital data is locked in the database. And since the objects have topological relationships with objects of neighboring map sheets (e.g. roads crossing the map boundaries), all 8 surrounding map sheets are locked as well.
The question is, what is the optimal order in which the leafs (on the lowest level) should be visited to satisfy following requirements:
each node has to be visited
we do not deal with travel times but with the time span a worker spends at each node (map)
the time spent at a node differs from node to node
while a worker is at a node, all adjacent nodes cannot be visited; this also holds between workers: they cannot work on a map adjacent to one already being processed
if a node has been visited, other nodes having the same parent should be preferred as the next node; this holds for all levels of parents
Finally, for a given number of nodes/maps and workers, we need an ordered series of nodes for each worker to visit, minimizing the overall time as well as the time for each parent.
After designing the solution the real work begins. We will find that the actual work may need more or less time than expected. Therefore it is necessary to replay the solution up to the current state and design a new solution with slightly different conditions, leading to another order of nodes.
Does somebody have an idea which data structure and which algorithm to use to solve this kind of problem?
I don't have a ready-made algorithm, but maybe the following helps in devising one:
Your exact topology is not clear. I assume from the other remarks that you are targeting a regular structure, in your case a 4x4 square.
The restriction that working on a node blocks any adjacent node can be used to identify a starting condition for the algorithm:
Put a worker at one corner of the total area and then place the others at distance 2 from it (first in the x direction and, as soon as that side is "filled", in the y direction). This will occupy all (x, y) nodes with x, y in {0, 2, ..., 2n}, where 2n <= size of the grid.
With a 4x4 area this allows a maximum of 4 workers, and positions one worker per child node of each level-2 grid node.
From there, let each worker process (x, y), (x, y+1), (x+1, y+1), (x+1, y): the 4 nodes of a small square.
If a worker is done but cannot proceed to the next planned node, you may advance it to the next free node in its schedule.
The more workers you have, the higher the risk of contention. If you have any estimates of the expected workload per node, you may prefer to start with the most expensive ones and arrange the processing sequence to continue with those that have the highest total expected cost.
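The starting placement and the per-square order described above can be sketched as follows (0-indexed coordinates on a hypothetical size x size grid of leaf sheets; function names are illustrative):

```python
def initial_placement(size):
    """Workers start on every sheet whose coordinates are both even,
    so no two workers begin on adjacent (even diagonally adjacent)
    sheets, satisfying the locking constraint at the start."""
    return [(x, y) for x in range(0, size, 2) for y in range(0, size, 2)]

def block_order(x, y):
    """Order in which a worker processes the 2x2 block with top-left
    corner (x, y): the four sheets sharing the same parent quadrant."""
    return [(x, y), (x, y + 1), (x + 1, y + 1), (x + 1, y)]
```

Note that once workers move off their starting corners, adjacent blocks can still contend (e.g. sheet (0, 1) of one block borders sheet (0, 2) of the next), which is why the "advance to the next free node" fallback above is needed.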

How to compute the average (or sum) of node values in a network?

Consider a network (graph) of N nodes, each holding a value. How can one design a program/algorithm (running on each node) that allows every node to compute the average (or sum) of all the node values in the network?
Assumptions are:
Direct communication between nodes is constrained by the graph topology, which is not a complete graph. Any other assumptions, if necessary for your algorithm, are allowable. The weakest one I assume is that there is a cycle in the graph that contains all the nodes.
N is finite.
N is sufficiently large that you can't store all the values and then compute their average (or sum). For the same reason, you can't "remember" whose value you've received (so you can't just redistribute the values you've received, add those you haven't seen to a buffer, and read off the result).
(The tags may not be right, since I don't know which field this kind of problem belongs to, if it is some kind of general problem.)
That is an interesting question. Here are some assumptions I've made before presenting a partial solution:
The graph is connected (in the case of a directed graph, strongly connected)
The nodes only communicate with their direct neighbours
It is possible to hold and send the sum of all numbers; that is, the sum either won't exceed a long or you have a data structure large enough that it won't overflow
I'd go with depth-first search. Node N0 initiates the algorithm and sends its value plus a count to its first neighbour (N0.1). N0.1 adds its own value, increments the count, and forwards the message to its next neighbour (N0.1.1). If the message comes back to either N0 or N0.1, they just forward it to another neighbour of theirs (N0.2 or N0.1.2).
The problem now is to know when to terminate the algorithm. Preferably you want to terminate as soon as you've reached all nodes, and afterwards just broadcast the final message. If you know how many nodes there are in the graph, just keep forwarding the message to the next node until every node has been reached. The last node will know that it has been reached (it can compare the count variable with the number of nodes in the graph) and broadcast the result.
If you don't know how many nodes there are and it's an undirected graph, then this is just a depth-first traversal of the graph. That means if N0.1 gets a message from anyone other than N0.1.1, it just bounces the message back, since you can't send messages to the parent while performing depth-first search. If it is a directed graph and you don't know the number of nodes, then you either come up with a mathematical model to prove when the algorithm has finished, or you learn the number of nodes.
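A centralized simulation of the token walk sketched above (assuming an undirected connected graph given as adjacency lists; the bounce-back behaviour is modelled here by the explicit stack):

```python
def token_sum(graph, values, start):
    """One message ('token') carrying a running (sum, count) walks the
    graph depth-first; each node adds its value on first visit, and
    bounces the token back when all its neighbours have been visited."""
    total = count = 0
    visited = set()
    stack = [start]                    # the token's return path
    while stack:
        node = stack[-1]
        if node not in visited:
            visited.add(node)
            total += values[node]
            count += 1
        nxt = next((n for n in graph[node] if n not in visited), None)
        if nxt is None:
            stack.pop()                # bounce the token back
        else:
            stack.append(nxt)          # forward to an unvisited neighbour
    return total, count                # the final holder can broadcast these
```

Note that at any moment only the pair (sum, count) is in flight, which fits the constraint that no node can store all the values.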
I've found a paper proposing a gossip-based algorithm to count the number of nodes in a dynamic network: https://gnunet.org/sites/default/files/Gossipico.pdf. Maybe that will help; you might even be able to use it to sum up the node values.
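Gossip can also compute the average directly, without the DFS-style termination problem: in push-sum (Kempe, Dobra, Gehrke), every node keeps a pair (s, w) and repeatedly sends half of it to a random neighbour; s/w converges to the global average at every node. A minimal synchronous sketch (the adjacency-list representation and round count are assumptions):

```python
import random

def push_sum(graph, values, rounds=200):
    """Each round, every node keeps half of its (s, w) pair and sends
    the other half to a uniformly random neighbour. Total mass
    (sum of all s, sum of all w) is conserved, so s/w converges to
    the global average at every node."""
    s = {v: float(values[v]) for v in graph}
    w = {v: 1.0 for v in graph}
    for _ in range(rounds):
        inbox = {v: [(s[v] / 2, w[v] / 2)] for v in graph}   # kept half
        for v in graph:
            target = random.choice(graph[v])
            inbox[target].append((s[v] / 2, w[v] / 2))       # sent half
        for v in graph:
            s[v] = sum(si for si, _ in inbox[v])
            w[v] = sum(wi for _, wi in inbox[v])
    return {v: s[v] / w[v] for v in graph}
```

The trade-off versus the token walk: push-sum needs no global coordination and tolerates message reordering, but produces an estimate that converges over time rather than an exact one-shot answer.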

Visiting graph edges with independent paths

Given a directed graph with multiple start nodes and multiple end nodes, I need to form paths that visit every reachable edge, but I cannot visit any edge (or vertex) more than once during a single pass. [This is to electrically test every connection in a network by sending signals from start to end nodes, but I cannot allow paths to short together.]
Because I cannot re-visit edges during a single pass:
I can safely ignore the cycles in the graph.
I know each path I form will block other paths.
Consequently, I cannot visit every reachable edge in one pass, so multiple passes are necessary.
From context, I know that the minimum number of passes will be the maximum number of edges entering any vertex. Once I finish a given pass, I am free to re-visit edges that were visited in previous passes, but never-visited edges are the ones that I most want to visit.
I would like to visit "many" edges per pass, so that I can reduce the total number of passes, but I do not strictly need to minimize the number of passes.
Any suggestions on algorithms to accomplish this? It sounds a little like the route inspection problem, except that my graph is directed.
It is not clear from the question whether you have one or many start points and one or many end points. For simplicity, let me assume a "one-to-many" network. Then your requirement (not visiting any edge or vertex more than once) means each pass actually generates a spanning tree of your graph with the given root.
A simple but not 100% solution that comes to mind is the following:
Assign some initial weights to the edges and apply a random spanning tree algorithm. Then decrease the weight (actually, the relative probability) of the visited edges. It is very likely that all edges will eventually be visited.
In the case of a "many-to-many" connection you can play with different starting points. If some sources are not connected to some sinks, the algorithm would throw an exception. If this is not what you expect, you can run a regular DFS first to collect all reachable vertices into a set; then you can use this set as a filter to form a boost::filtered_graph.
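A toy version of that idea for the undirected case (helper names are hypothetical; decreasing a visited edge's relative probability is implemented here by multiplying its weight up, since the Kruskal-style pass below prefers low weights):

```python
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def random_tree(nodes, edges, weight):
    """Kruskal on randomly perturbed weights: edges with a smaller
    weight factor are more likely to be considered first."""
    order = sorted(edges, key=lambda e: weight[e] * random.random())
    parent = {v: v for v in nodes}
    tree = []
    for u, v in order:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def cover_edges(nodes, edges, passes=20):
    """Repeatedly draw spanning trees, penalizing already-visited
    edges so that unvisited edges get picked in later passes."""
    weight = {e: 1.0 for e in edges}
    visited = set()
    for _ in range(passes):
        for e in random_tree(nodes, edges, weight):
            visited.add(e)
            weight[e] *= 4.0   # make re-selection less likely
        if len(visited) == len(edges):
            break
    return visited
```

For the directed case you would replace the spanning tree by an arborescence rooted at a start node, but the weight-decay loop stays the same.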

Who can explain 'Replication' in the Dynamo paper?

In dynamo paper : http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf
The Replication section says:
To account for node failures, preference list contains more than N
nodes.
I want to know why, and does this 'node' mean a virtual node?
It is for increasing Dynamo's availability. If the top N nodes in the preference list are healthy, the remaining nodes will not be used. But if some of the top N nodes are unavailable, the nodes after them in the list will be used instead. For write operations, this is called hinted handoff.
The diagram makes sense both for physical nodes and virtual nodes.
I also don't understand the part you're talking about.
Background:
My understanding of the paper is that, since Dynamo's default replication factor is 3, each node N is responsible for the ring range from N-3 to N (while also being the coordinator for the ring range N-1 to N).
That explains why:
node B holds keys from F to B
node C holds keys from G to C
node D holds keys from A to D
And since range A-B falls within all those ranges, nodes B, C and D are the ones that have that range of key hashes.
The paper states:
The section 4.3 Replication:
To address this, the preference list for a key is constructed by skipping positions in the ring to ensure that the list contains only distinct physical nodes.
How can the preference list contain more than N nodes if it is constructed by skipping virtual ones?
IMHO they should have stated something like this:
To account for node failures, ring range N-3 to N may contain more than N nodes, N physical nodes plus x virtual nodes.
The distributed DBMS Dynamo falls into the class of systems that sacrifice Consistency in the CAP triangle (Consistency, Availability, Partition tolerance).
So the system can be inconsistent even though it is highly available. Because network partitions are a given in distributed systems, you cannot give up Partition Tolerance.
Addressing your questions:
To account for node failures, preference list contains more than N nodes. I want to know why?
One fact of large-scale distributed systems is that in a system of thousands of nodes, failure of nodes is the norm.
You are bound to have a few nodes failing in such a big system. You don't treat that as an exceptional condition; you prepare for such situations. How do you prepare?
For Data: You simply replicate your data on multiple nodes.
For Execution: You perform the same execution on multiple nodes. This is called speculative execution. As soon as you get the first result from the multiple executions you ran, you cancel the other executions.
That's the answer right there - you replicate your data to prepare for the case when node(s) may fail.
To account for node failures, preference list contains more than N nodes. Does this 'node' mean virtual node?
I wanted to ensure that I always have access to my house, so I copied my house keys and gave them to a family member, who put them in a safe inside our house. Now when we all go out, I'm under the illusion that we have a spare set of keys, so in case I lose mine we can still get into the house. But those keys are in the house itself: losing my keys means losing access to my house. This is what would happen if we replicated the data onto virtual nodes instead of physical nodes.
A virtual node is not a separate physical node, so when the physical node to which it is mapped fails, the virtual node goes away as well.
This 'node' cannot mean virtual node if the aim is high availability, which is the aim in Dynamo DB.
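To make the skipping rule from section 4.3 concrete, here is a hypothetical consistent-hashing sketch (MD5 and the token naming are illustrative, not Dynamo's actual code). The walk may pass many ring positions (virtual nodes), but only distinct physical nodes enter the list, which is how the list can span more than N ring positions while still holding N distinct physical nodes:

```python
import hashlib

def h(key):
    """Deterministic position on the ring (illustrative hash choice)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def build_ring(physical_nodes, vnodes_per_node):
    """Map several virtual-node tokens per physical node onto the ring."""
    return sorted(
        (h(f"{node}#{i}"), node)
        for node in physical_nodes
        for i in range(vnodes_per_node)
    )

def preference_list(ring, key, n_distinct):
    """Walk clockwise from the key's position, skipping tokens whose
    physical node is already in the list (section 4.3's rule)."""
    start = h(key)
    idx = next((i for i, (tok, _) in enumerate(ring) if tok >= start), 0)
    result, seen = [], set()
    for i in range(len(ring)):
        _, node = ring[(idx + i) % len(ring)]
        if node not in seen:
            seen.add(node)
            result.append(node)
        if len(result) == n_distinct:
            break
    return result
```

Replication on distinct physical nodes is exactly what the house-keys analogy demands: the replicas must not share a single point of failure.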

Pregel BSP: Difference between partitioning and assignment of user input by master to worker

The pregel paper mentions:
a) The Pregel library divides a graph into partitions, each consisting
of a set of vertices and all of those vertices’ outgoing edges...The
master determines how many partitions the graph will have, and assigns
one or more partitions to each worker machine.
and
b) The master assigns a portion of the user’s input to each worker. The
input is treated as a set of records, each of which contains an
arbitrary number of vertices and edges. The division of inputs is
orthogonal to the partitioning of the graph itself, and is typically
based on file boundaries.
I have two questions here:
1) In b), how is the master assigning "a portion of the user's input" to each worker different from it assigning "one or more partitions to each worker machine"? Do they serve different functions?
I thought we have to figure out our partitions and then feed one or more partition to a worker machine and that is all. What am I missing?
2) If the division of inputs is solely based on file boundaries, does that mean vertices of a partition can reside on different machines? (because two vertices of a partition may reside on different files and hence be processed by different worker machines).
Question 1:
Assigning the user's input to workers and assigning partitions to workers are two different steps: the input assignment only determines which worker reads which input records.
The user's input describes a graph. This graph is split into several partitions, and the partitions are divided among the workers.
The worker is where the partitions are processed; a worker may hold one or more partitions. Within each partition, the worker selects the active vertices and runs their superstep computations. A vertex read from an input record by one worker is forwarded to the worker that owns that vertex's partition.
Question 2:
No. All vertices inside a partition are on the same worker. If a vertex were moved to another machine (and thus another worker), it would belong to another partition.
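The orthogonality of the two assignments can be shown with a toy loader. Only hash(ID) mod N is the paper's stated default partitioning; the file layout, worker count, and round-robin partition-to-worker mapping below are assumptions for illustration:

```python
def owner_partition(vertex_id, num_partitions):
    """The paper's default partitioning function: hash(ID) mod N
    (integer vertex IDs assumed, so Python's hash is stable)."""
    return hash(vertex_id) % num_partitions

def load(input_files, num_partitions, num_workers):
    """Worker w reads file w (input divided by file boundaries), then
    ships each vertex record to the worker that owns the vertex's
    partition, which may differ from the worker that read it."""
    worker_of = lambda p: p % num_workers        # partition -> worker
    outboxes = {w: [] for w in range(num_workers)}
    for w, records in enumerate(input_files):
        for vertex_id, out_edges in records:
            p = owner_partition(vertex_id, num_partitions)
            outboxes[worker_of(p)].append((p, vertex_id, out_edges))
    return outboxes
```

So two vertices of the same partition may well be read from different files by different workers, but after loading they always end up on the single worker that owns their partition.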