I am working on a parallel finite element method on moving meshes.
So I will need to call ParMETIS_V3_AdaptiveRepart from ParMetis to perform re-partitioning every time I re-mesh.
When successful, the function only generates the partitioning information, i.e. which elements belong to which processors.
However, the neighbors of a process are important as well, in order to construct the ghost layers of a sub-mesh.
So I am wondering whether there is an efficient way to get the information about shared (overlapping) entities and neighbors, or whether ParMetis actually provides this information?
In ParMetis, the function ParMETIS_V3_AdaptiveRepart does more or less the same thing as ParMETIS_V3_PartKway.
The output of ParMETIS_V3_PartKway is part: "an array of size equal to the number of locally-stored vertices. Upon successful completion the
partition vector of the locally-stored vertices is written to this array."
It also returns the number of edges that are cut (which is only part of what you want).
But METIS does not provide a way to create the "ghost layers" as you elegantly say.
However, since you created the graph, you know how to find each neighbour of every element. You can then check whether a neighbouring element is in the current process's part of the graph and whether part[element] == part[neighbour_element]. If the neighbouring element is not on the current process, you will have to do a bit of MPI, as in the sketch below.
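To make that last step concrete, here is a rough sketch (Python with mpi4py, purely illustrative; the function and variable names are mine, with vtxdist/xadj/adjncy following the usual ParMetis distributed-CSR conventions) of fetching the new partition of off-process neighbour elements, from which the neighbour ranks and the ghost layer can be derived:

# Sketch: after ParMETIS_V3_AdaptiveRepart, fetch the new partition of
# off-process neighbour elements so neighbour ranks / ghost layers can be built.
# vtxdist, xadj, adjncy: distributed CSR element graph; part: ParMetis output.
from mpi4py import MPI
from bisect import bisect_right

def neighbour_partitions(comm, vtxdist, xadj, adjncy, part):
    rank, size = comm.Get_rank(), comm.Get_size()
    first = vtxdist[rank]                      # global id of first local element

    # 1. collect global ids of off-process neighbours, grouped by current owner
    requests = [set() for _ in range(size)]
    for v in range(len(part)):
        for g in adjncy[xadj[v]:xadj[v + 1]]:
            owner = bisect_right(vtxdist, g) - 1
            if owner != rank:
                requests[owner].add(g)
    requests = [sorted(s) for s in requests]

    # 2. everyone sends the ids it needs and receives the ids it owns
    wanted_from_me = comm.alltoall(requests)

    # 3. reply with the new partition of the requested local elements
    replies = comm.alltoall([[part[g - first] for g in ids]
                             for ids in wanted_from_me])

    # 4. map: global neighbour id -> its new partition
    ghost_part = {}
    for ids, parts in zip(requests, replies):
        ghost_part.update(zip(ids, parts))
    return ghost_part

The set of distinct values in ghost_part (plus the partitions of your own elements whose neighbours ended up elsewhere) gives you the neighbouring partitions.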
I am working with bit-flipping decoding (hard decision) for one bit flip.
I have followed this description for the H matrix below: "For the bit-flipping algorithm the messages passed along the Tanner graph edges are also binary: a bit node sends a message declaring if it is a one or a zero, and each check node sends a message to each connected bit node, declaring what value the bit is based on the information available to the check node.
The check node determines that its parity-check equation is satisfied if the modulo-2 sum of the incoming bit values is zero. If the majority of the messages received by a bit node are different from its received value the bit node changes (flips) its current value. This process is repeated until all of the parity-check equations are satisfied.
Correct codeword = [10010101]
H matrix[4][8] = {{0,1,0,1,1,0,0,1},{1,1,1,0,0,1,0,0},{0,0,1,0,0,1,1,1}, {1,0,0,1,1,0,1,0}};
ReceivedCodeWord[8]={1,0,1,1,0,1,0,1}; //Error codeword
I need to get [10010101], but instead I am getting [10010001] for ReceivedCodeWord[8] = {1,0,1,1,0,1,0,1}.
But for other possible received codewords I get the correct result.
e.g. for ReceivedCodeWord [00010101] I get the correct codeword [10010101],
and for ReceivedCodeWord [11010101] I get the correct codeword [10010101].
Doubt: why, for ReceivedCodeWord {1,0,1,1,0,1,0,1}, am I getting [10010001]? It is totally wrong. Please explain.
Here's a link
Thanks
This is caused by the linear block code you used. Firstly, this code isn't an LDPC code, since the H matrix doesn't satisfy the row-column constraint (e.g., the 3rd column and the 6th column have 1s in the same two positions).
(1) When {1,0,1,1,0,1,0,1} is received, the check sums are {0,1,1,0}. If you use a multiple-bit-flipping decoding algorithm, which means more than one bit can be flipped during an iteration, the 3rd and 6th variable nodes will be flipped together, since both of them connect to two unsatisfied check nodes; the result after the first iteration is then {1,0,0,1,0,0,0,1}. During the second iteration the check sums are still {0,1,1,0}, so the 3rd and 6th variable nodes are flipped again, and this process repeats in later iterations.
(2) The minimum Hamming distance of this code is 2, so it can detect 1 error and correct 0 errors. Even if you use a single-bit-flipping decoding algorithm, there is a 50% probability of decoding the received codeword {1,0,1,1,0,1,0,1} as another valid codeword, {1,0,1,1,0,0,0,1}.
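To illustrate (1), here is a minimal sketch (not the questioner's code) of a hard-decision, multiple-bit-flipping decoder that flips every bit involved in the maximum number of unsatisfied checks; with the H matrix and received word from the question it reproduces exactly this oscillation:

# Sketch of a hard-decision bit-flipping decoder that flips, in each iteration,
# every bit participating in the maximum number of unsatisfied parity checks.
H = [[0,1,0,1,1,0,0,1],
     [1,1,1,0,0,1,0,0],
     [0,0,1,0,0,1,1,1],
     [1,0,0,1,1,0,1,0]]
received = [1,0,1,1,0,1,0,1]

def bit_flip_decode(H, r, max_iter=10):
    r = r[:]
    for it in range(max_iter):
        syndrome = [sum(h*b for h, b in zip(row, r)) % 2 for row in H]
        if not any(syndrome):
            return r                                   # all checks satisfied
        # number of unsatisfied checks each bit participates in
        fails = [sum(H[c][j] for c in range(len(H)) if syndrome[c])
                 for j in range(len(r))]
        worst = max(fails)
        r = [b ^ (f == worst) for b, f in zip(r, fails)]   # flip the worst bits
        print(it, syndrome, r)
    return r

bit_flip_decode(H, received)
# the syndrome stays {0,1,1,0} and the word oscillates between
# [1,0,0,1,0,0,0,1] and [1,0,1,1,0,1,0,1] -- the decoder never converges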
I have a collection of 15M (million) DAGs (directed acyclic graphs - directed hypercubes actually) that I would like to remove isomorphisms from. What is the common algorithm for this? Each graph is fairly small, a hypercube of dimension N where N is 3 to 6 (for now), resulting in graphs of 64 nodes each for the N=6 case.
Using networkx and Python, I implemented it like this, which works just fine for small sets like 300k (thousand) (it runs in a few days).
def isIsomorphicDuplicate(hcL, hc):
    """checks if hc is an isomorphism of any of the hc's in hcL
    Returns True if hcL contains an isomorphism of hc
    Returns False if it is not found"""
    #for each cube in hcL, check if hc could be isomorphic
    #if it could be isomorphic, then check if it is
    #if it is isomorphic, then return True
    #if all comparisons have been made already, then it is not an isomorphism and return False
    for saved_hc in hcL:
        if nx.faster_could_be_isomorphic(saved_hc, hc):
            if nx.fast_could_be_isomorphic(saved_hc, hc):
                if nx.is_isomorphic(saved_hc, hc):
                    return True
    return False
One better way to do it would be to convert each graph to its canonical ordering, sort the collection, then remove the duplicates. This avoids checking each of the 15M graphs in a pairwise is_isomorphic() test. I believe the above implementation is something like O(N!N) (not taking the isomorphism check itself into account), whereas converting everything to a canonical ordering should take O(N), the search O(N log(N)), and the removal of duplicates O(N). O(N!N) >> O(N log(N))
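In other words, once some canonical form is available the dedup itself is trivial; something like this sketch, where canonical_form() is a placeholder for whatever canonical labelling / certificate ends up being used:

# Dedup by canonical form: canonical_form() must return equal values for
# isomorphic graphs and different values for non-isomorphic ones.
def deduplicate(graphs, canonical_form):
    seen = {}
    for g in graphs:
        key = canonical_form(g)
        if key not in seen:            # first representative of this class
            seen[key] = g
    return list(seen.values())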
I found this paper on Canonical graph labeling, but it is very tersely described with mathematical equations, no pseudocode: "McKay's Canonical Graph Labeling Algorithm" - http://www.math.unl.edu/~aradcliffe1/Papers/Canonical.pdf
tldr: I have an impossibly large number of graphs to check via pairwise isomorphism testing. I believe the common way this is done is via canonical ordering. Do any packaged algorithms or published, straightforward-to-implement algorithms (i.e. with pseudocode) exist?
Here is a breakdown of McKay's Canonical Graph Labeling Algorithm, as presented in the paper by Hartke and Radcliffe [link to paper].
I should start by pointing out that an open source implementation is available here: nauty and Traces source code.
Ok, let's do this! Unfortunately this algorithm is heavy in graph theory, so we need some terms. First I will start by defining isomorphic and automorphic.
Isomorphism:
Two graphs are isomorphic if they are the same, except that the vertices are labelled differently. The following two graphs are isomorphic.
Automorphic:
Two graphs are automorphic if they are completely the same, including the vertex labeling. The following two graphs are automorphic. This seems trivial, but turns out to be important for technical reasons.
Graph Hashing:
The core idea of this whole thing is to have a way to hash a graph into a string, then for a given graph you compute the hash strings for all graphs which are isomorphic to it. The isomorphic hash string which is alphabetically (technically lexicographically) largest is called the "Canonical Hash", and the graph which produced it is called the "Canonical Isomorph", or "Canonical Labelling".
With this, to check if any two graphs are isomorphic you just need to check if their canonical isomorphs (or canonical labellings) are equal (i.e. are automorphs of each other). Wow, jargon! Unfortunately this is even more confusing without the jargon :-(
The hash function we are going to use is called i(G) for a graph G: build a binary string by looking at every pair of vertices in G (in order of vertex label) and put a "1" if there is an edge between those two vertices, a "0" if not. This way the j-th bit in i(G) represents the presence or absence of that edge in the graph.
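A naive version of i(G) for an undirected networkx graph might be sketched like this (vertex order here is just the sorted node labels):

# Naive i(G): visit every vertex pair in label order and record '1' if the
# edge is present, '0' otherwise; n vertices give a string of n*(n-1)/2 bits.
def i_of_G(G):
    nodes = sorted(G.nodes())
    bits = []
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            bits.append('1' if G.has_edge(nodes[a], nodes[b]) else '0')
    return ''.join(bits)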
McKay's Canonical Graph Labeling Algorithm
The problem is that for a graph on n vertices, there are O(n!) possible isomorphic hash strings based on how you label the vertices, and many many more if we have to compute the same string multiple times (i.e. automorphs). In general we have to compute every isomorph hash string in order to find the biggest one; there's no magic shortcut. McKay's algorithm is a search algorithm that finds this canonical isomorph faster by pruning all the automorphs out of the search tree, forcing the vertices in the canonical isomorph to be labelled in increasing degree order, and a few other tricks that reduce the number of isomorphs we have to hash.
(1) Sect 4: the first step of McKay's is to sort vertices according to degree, which prunes out the majority of isomorphs to search, but is not guaranteed to be a unique ordering since there may be more than one vertex of a given degree. For example, the following graph has 6 vertices; verts {1,2,3} have degree 1, verts {4,5} have degree 2 and vert {6} has degree 3. Its partial ordering according to vertex degree is {1,2,3|4,5|6}.
(2) Sect 5: Impose artificial symmetry on the vertices which were not distinguished by vertex degree; basically we take one of the groups of vertices with the same degree, and in turn pick one at a time to come first in the total ordering (fig. 2 in the paper). So in our example above, the node {1,2,3|4,5|6} would have children { {1|2,3|4,5|6}, {2|1,3|4,5|6}, {3|1,2|4,5|6} } by expanding the group {1,2,3}, and also children { {1,2,3|4|5|6}, {1,2,3|5|4|6} } by expanding the group {4,5}. This splitting can be done all the way down to the leaf nodes, which are total orderings like {1|2|3|4|5|6} that describe a full isomorph of G. This allows us to take the partial ordering by vertex degree from (1), {1,2,3|4,5|6}, and build a tree listing all candidates for the canonical isomorph -- which is already WAY fewer than n! combinations since, for example, vertex 6 will never come first. Note that McKay evaluates the children in a depth-first way, starting with the smallest group first; this leads to a deeper but narrower tree which is better for online pruning in the next step. Also note that each total-ordering leaf node may appear in more than one subtree; that's where the pruning comes in!
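A small sketch of this splitting step, with a partial ordering represented as a list of cells (note that McKay expands the smallest non-singleton cell first, whereas this sketch just takes the first one it finds):

# Step (2) sketch: expand one non-singleton cell of the partial ordering by
# letting each of its vertices come first in turn.
def children(partition):
    for i, cell in enumerate(partition):
        if len(cell) > 1:
            kids = []
            for v in cell:
                rest = [u for u in cell if u != v]
                kids.append(partition[:i] + [[v], rest] + partition[i + 1:])
            return kids
    return []   # every cell is a singleton: this is a leaf (a total ordering)

# children([[1, 2, 3], [4, 5], [6]]) ->
#   [[[1], [2, 3], [4, 5], [6]], [[2], [1, 3], [4, 5], [6]], [[3], [1, 2], [4, 5], [6]]]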
(3) Sect. 6: While searching the tree, look for automorphisms and use that to prune the tree. The math here is a bit above me, but I think the idea is that if you discover that two nodes in the tree are automorphisms of each other then you can safely prune one of their subtrees because you know that they will both yield the same leaf nodes.
I have only given a high-level description of McKay's, the paper goes into a lot more depth in the math, and building an implementation will require an understanding of this math. Hopefully I've given you enough context to either go back and re-read the paper, or read the source code of the implementation.
This is indeed an interesting problem.
I would approach it from the adjacency matrix angle. Two isomorphic graphs will have adjacency matrices where the rows / columns are in a different order. So my idea is to compute for each graph several matrix properties which are invariant to row/column swaps, off the top of my head:
numVerts, min, max, sum/mean, trace (probably not useful if there are no reflexive edges), norm, rank, min/max/mean column/row sums, min/max/mean column/row norm
and any pair of isomorphic graphs will be the same on all properties.
You could make a hash function which takes in a graph and spits out a hash string like
string hashstr = str(numVerts)+str(min)+str(max)+str(sum)+...
then sort all graphs by hash string and you only need to do full isomorphism checks for graphs which hash the same.
Given that you have 15 million graphs on 36 nodes, I'm assuming that you're dealing with weighted graphs; for unweighted undirected graphs this technique will be much less effective.
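As a sketch of that pre-filter (assuming a reasonably recent networkx/numpy; the exact set of invariants is of course up to you):

# Invariant-hash pre-filter: compute row/column-order invariant quantities of
# the adjacency matrix and join them into a string key; only graphs with equal
# keys need a full isomorphism check.
import numpy as np
import networkx as nx

def invariant_key(G):
    A = nx.to_numpy_array(G)
    row_sums = sorted(A.sum(axis=1))
    return "|".join(str(x) for x in (
        G.number_of_nodes(),
        G.number_of_edges(),
        A.min(), A.max(), A.sum(),
        round(np.linalg.norm(A), 6),
        row_sums,
    ))

You then group the graphs by this key in a dictionary and run full isomorphism checks only within each group.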
This is an interesting question which I do not have an answer for! Here is my two cents:
By 15M do you mean 15 MILLION undirected graphs? How big is each one? Any properties known about them (trees, planar, k-trees)?
Have you tried minimizing the number of checks by detecting false positives in advance? Such checks include computing and comparing numbers such as the number of vertices and edges, the degrees, and the degree sequences, in addition to other heuristics for testing whether two given graphs are NOT isomorphic. Also, check out nauty. It may be your way to check them (and generate a canonical ordering).
If all your graphs are hypercubes (like you said), then this is trivial: All hypercubes with the same dimension are isomorphic, hypercubes with different dimension aren't. So run through your collection in linear time and throw each graph in a bucket according to its number of nodes (for hypercubes: different dimension <=> different number of nodes) and be done with it.
Since you mentioned that smaller sets of ~300k graphs can be checked for isomorphism, I would try to split the 15M graphs into groups of ~300k graphs and run the isomorphism test on each group.
say each graph is Gi := (Vi, Ei) (vertices, edges)
(1) create buckets of graphs such that the n-th bucket contains only graphs with |V|=n
(2) for each bucket created in (1) create subbuckets such that the (n,m)-th subbucket contains only graphs such that |V|=n and |E|=m
(3) if the groups are still too large, sort the nodes within each graph by their degrees (i.e. the number of edges connected to the node), create a vector from it, and distribute the graphs by this vector
example for (3):
assume 4 nodes V = {v1, v2, v3, v4}. Let d(v) be v's degree, with d(v1)=3, d(v2)=1, d(v3)=5, d(v4)=4; then find < := transitive hull( { (v2,v1), (v1,v4), (v4,v3) } ) and create a vector depending on the degrees and the order, which leaves you with
(1,3,4,5) = (d(v2), d(v1), d(v4), d(v3)) = d( {v2, v1, v4, v3} ) = d(<)
now you have divided the 15M graphs into buckets where each bucket has the following characteristics:
n nodes
m edges
each graph in the group has the same 'out-degree-vector'
I assume this is fine-grained enough if you are not expecting to find too many isomorphisms
cost so far: O(n) + O(n) + O(n*log(n))
(4) Now you can assume that members inside each bucket are likely to be isomorphic. You can run your isomorphism check within the bucket and only need to compare the currently tested graph against the representatives you have already found within this bucket. By assumption there shouldn't be too many, so I assume this to be quite cheap.
At step (4) you can also happily distribute the computation to several compute nodes, which should really speed up the process.
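A sketch of the bucketing in steps (1)-(3), plus the per-bucket check in (4), over networkx graphs (illustrative only, not tuned for 15M graphs):

# Bucket by (|V|, |E|, sorted degree sequence); run the expensive isomorphism
# check only against the representatives already kept in the same bucket.
from collections import defaultdict
import networkx as nx

def bucket_and_deduplicate(graphs):
    buckets = defaultdict(list)          # key -> representatives found so far
    for g in graphs:
        key = (g.number_of_nodes(),
               g.number_of_edges(),
               tuple(sorted(dict(g.degree()).values())))
        reps = buckets[key]
        if not any(nx.is_isomorphic(g, rep) for rep in reps):
            reps.append(g)
    return [g for reps in buckets.values() for g in reps]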
Maybe you can just use McKay's implementation? It is found here now: http://pallini.di.uniroma1.it/
You can convert your 15M graphs to the compact graph6 format (or sparse6) which nauty uses and then run the nauty tool labelg to generate the canonical labels (also in graph6 format).
For example - removing isomorphic graphs from a set of random graphs:
#gnp.py
import networkx as nx
for i in range(100000):
    graph = nx.gnp_random_graph(10, 0.1)
    print(nx.generate_graph6(graph, header=False))
[nauty25r9]$ python gnp.py > gnp.g6
[nauty25r9]$ cat gnp.g6 |./labelg |sort |uniq -c |wc -l
>A labelg
>Z 10000 graphs labelled from stdin to stdout in 0.05 sec.
710
Given two places in a region, how can I compute a path of a certain distance (the distance is given as a parameter) that connects the two places? The resulting path should ideally be composed of roads that are visited no more than once.
I have learnt that this problem can be viewed as a graph problem, where the roads in the map represent the edges of the graph and the intersections/junctions represent the nodes. From this, the problem can be simplified to finding a path of a certain length between any two nodes in the graph.
One of my approaches to this problem can be broken into two steps:
Finding the shortest path between the two nodes (using A* or a similar algorithm).
Extending/Expanding the shortest path that is returned from A* so that it's long enough.
Now, I am not exactly sure whether this approach is worth pursuing or whether a better approach exists. Also, I've not come across any methods that would allow me to expand the path - are there any specific techniques that can achieve this?
My idea is to use an ad-hoc heuristic function. The goal state should be reaching the destination with a path of length L. (So if you reach the destination with a path that is too short, the goal test is not satisfied and the algorithm will continue.)
# goal state
if state == dest and curr_length == required_length:
    return path
For every other node the heuristic must give a value that is a combination of the expected distance from the destination and the required length that must still be travelled.
def h(state, curr_length):
    dist = manhattan_distance(state, goal)
    # abs is needed: if curr_length > required_length this value would become
    # negative and the node would get a wrong (too optimistic) estimate
    remaining = abs(required_length - curr_length)
    return dist + remaining
I'm not sure if this heuristic is consistent or not, but you can try with something like this and check the results.
If this search works, it should also be optimal.
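Purely as an illustration (the graph representation, node coordinates and edge lengths are assumptions, not given by the question), the search could be sketched like this:

# A* over states (node, length travelled so far), with
# h = manhattan_distance(node, goal) + abs(required_length - length).
# Nodes are (x, y) grid coordinates; graph maps node -> list of (neighbour, edge_length).
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def find_path_of_length(graph, start, goal, required_length):
    def h(node, length):
        return manhattan(node, goal) + abs(required_length - length)

    frontier = [(h(start, 0), 0, start, [start])]
    visited = set()
    while frontier:
        f, length, node, path = heapq.heappop(frontier)
        if node == goal and length == required_length:
            return path                                # the goal test from above
        if (node, length) in visited:
            continue
        visited.add((node, length))
        for nxt, edge_len in graph.get(node, []):
            if nxt in path:                            # roads/nodes visited at most once
                continue
            new_len = length + edge_len
            heapq.heappush(frontier,
                           (new_len + h(nxt, new_len), new_len, nxt, path + [nxt]))
    return None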
For example, suppose I have a graph G with all blue nodes and one red node, and another graph F that also has all blue nodes and one red node.
What is an algorithm that I can run to verify that these two graphs are isomorphic with respect to their colored nodes?
I have made a few attempts at trying to create a polynomial graph isomorphism algorithm, and while I have yet to create an algorithm that is proven to be polynomial for every case, one algorithm I came up with is particularly suited for this purpose. It's based on a DFA minimization algorithm (the specific algorithm is http://en.wikipedia.org/wiki/DFA_minimization#Hopcroft.27s_algorithm ; you may want to find a description from elsewhere, since Wikipedia's is difficult to follow).
The original algorithm was initialized by organizing the vertexes into distinct groups based on degree (one group for vertexes of degree 1, one for vertexes of degree 2, etc.). For your purposes, you will want to organize the vertexes into groups based upon both degree and label; this will ensure that no two nodes will be paired if they have different labels. Each graph should have its own structure containing such groups. Check the collection of groups for both graphs; there should be the same number of groups for the two graphs, and for each group in one graph, there should be a group in the other graph containing the same number of vertexes of the same degree and label. If this isn't the case, the graphs aren't isomorphic.
At each iteration of the main algorithm, you should generate a new data structure for each of the two graphs for the vertex groups that the next step will use. For each group, generate a list for each vertex of group indices/IDs that correspond to the vertexes that are adjacent to the vertex in question (include duplicate groups in this list). Check each group to see if the sorted group index/ID list for each contained vertex is the same. If this is the case, create an unmodified copy of this group in the next step's group structure. If this isn't the case, then for each unique list of group indices/IDs within that group, create a new group for vertexes within the original group that generated that list and add this new group to the next step's group structure. If you do not subdivide any of the groups of either graph in a given iteration, stop running the main portion of this algorithm. If you subdivide at least one group, you will need to once again check to make sure the group structures of the two graphs correspond to each other. This check will be similar to the one performed at the end of the algorithm's initialization (you may even be able to use the same function for both). If this check fails, then the graphs aren't isomorphic. If the check passes, then discard/free the current group structures and start the next iteration with the freshly created ones.
To make the process of determining "corresponding groups" easier, I would highly recommend using a predictable scheme for adding groups to the structure. For example, if you add groups during initialization in (degree, label) order, subdivide groups in ascending index order, and add subdivided groups to the new structure based on the order of the group index list (i.e., sorted by first listed index, then second, etc.), then corresponding groups between the two group structures will always have the same index, which makes the process of keeping track of which groups correspond to each other much easier.
If all groups contain 3 or fewer vertexes when the algorithm completes, then the graphs are isomorphic (for corresponding groups containing 2 or 3 vertexes, any vertex pairing is valid). If this isn't the case (this always happens for graphs where all nodes have equal degree and label, and sometimes happens for subgraphs with that property), then the graphs are not yet determined to be isomorphic or non-isomorphic. To differentiate between the two cases, choose an arbitrary node of the first graph's largest group and separate it into its own group. Then, for each node of the other graph's largest group, try running the algorithm again with that node separated into its own group. In essence, you are choosing an unpaired node from the first graph and pairing it by guess-and-check to every node in the second graph that is still a plausible pairing. If any of the forked iterations returns an isomorphism, the graphs are isomorphic. If none of them do, the graphs are not isomorphic.
For general cases, this algorithm is polynomial. In corner cases, the algorithm might be exponential. Whether this is the case or not is related to how frequently the algorithm can be forced to fork in the worst case of both graph input and node selection, which I have had difficulties trying to put useful bounds on. For example, although the algorithm forks at every step when comparing two full graphs, every branch of that tree produces an isomorphism; therefore, the algorithm returns in polynomial time in this case even though traversing the entire execution tree would require exponential time since traversing only one branch of the execution tree takes polynomial time.
Regardless, this algorithm should work well for your purposes. I hope my explanation of it was comprehensible; if not, I can try providing examples of the algorithm handling simple cases or expressing it as pseudocode instead.
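Not the algorithm above verbatim, but a rough sketch of the core refinement loop (group by degree and label, then repeatedly split groups by the multiset of neighbouring group ids) could look like this:

# Refinement sketch: adj maps vertex -> set of neighbours, labels maps
# vertex -> colour.  Start with groups keyed by (degree, label); then
# repeatedly re-key every vertex by (old group, sorted neighbour groups)
# until the number of groups stops growing.
def refine(adj, labels):
    group = {v: (len(adj[v]), labels[v]) for v in adj}
    while True:
        keys = {v: (group[v], tuple(sorted(group[u] for u in adj[v])))
                for v in adj}
        ids = {key: i for i, key in enumerate(sorted(set(keys.values())))}
        new_group = {v: ids[keys[v]] for v in adj}
        if len(set(new_group.values())) == len(set(group.values())):
            return new_group                           # partition is stable
        group = new_group

Two graphs can then only be isomorphic if the resulting group-size histograms match; the guess-and-check forking described above would sit on top of a loop like this.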
Years ago, I created a simple and flexible algorithm for exactly this problem (graph isomorphism with labels).
I named it "Powerhash", and creating the algorithm required two insights. The first is the power iteration graph algorithm, also used in PageRank. The second is the ability to replace the step function inside power iteration with anything we want. I replaced it with a function that does the following on each iteration, for each node:
Sort the hashes (from previous iteration) of the node's neighbors
Hash the concatenated sorted hashes
Replace node's hash with newly computed hash
On the first step, a node's hash is affected by its direct neighbors. On the second step, a node's hash is affected by the neighborhood 2-hops away from it. On the Nth step a node's hash will be affected by the neighborhood N-hops around it. So you only need to continue running the Powerhash for N = graph_radius steps. In the end, the graph center node's hash will have been affected by the whole graph.
To produce the final hash, sort the final step's node hashes and concatenate them together. After that, you can compare the final hashes to find if two graphs are isomorphic. If you have labels, then add them (on the first iteration) in the internal hashes that you calculate for each node.
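A minimal sketch of the iteration (a WL-style variant in which each node's own previous hash is mixed in along with its neighbours'; adjacency and optional labels are plain dicts):

# adj: dict node -> iterable of neighbours; labels: optional dict node -> label.
import hashlib

def h(s):
    return hashlib.sha1(s.encode()).hexdigest()

def powerhash(adj, labels=None, iterations=None):
    hashes = {v: h(str(labels[v]) if labels else "") for v in adj}
    for _ in range(iterations or len(adj)):     # the graph radius would suffice
        hashes = {v: h(hashes[v] + "".join(sorted(hashes[u] for u in adj[v])))
                  for v in adj}
    return h("".join(sorted(hashes.values())))  # graph-level fingerprint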
For more on this you can look at my post here:
https://plus.google.com/114866592715069940152/posts/fmBFhjhQcZF
The algorithm above was implemented inside the "madIS" functional relational database. You can find the source code of the algorithm here:
https://github.com/madgik/madis/blob/master/src/functions/aggregate/graph.py
Just checking: do you mean strict graph isomorphism or something else? Isomorphic graphs have the same adjacency relations (i.e. if node A is adjacent to node B in one graph, then node g(A) is adjacent to node g(B) in the other graph, where g is the transformation applied to the first one...). If you just want to check whether one graph has the same types and numbers of nodes as another, you can just compare counts.