I constructed a simple graph:
digraph{
rankdir = LR;
{
rank="source";
Sa;
Sb;
Sc;
St;
}
St -> {t_1[label="t",shape=plaintext];}
Na;
{t_a[label="t",shape=plaintext];}->Na
Sa->Na;
Sb->Na;
Sc->Na;
subgraph cluster_b {
fillcolor = "#ddDDdd";
style=filled;
label="";
Nb1;
Nb;
Nb1->Nb;
}
{t_2[label="t",shape=plaintext];}->Nb1
Sa->Nb;
Nc;
{t_c[label="t",shape=plaintext];}->Nc
Nd;
{t_d[label="t",shape=plaintext];}->Nd
Na->Nd;
Nb->Nc;
Nd->O1;
Nc->Nd;
{
rank="sink";
O1;
}
}
It seems that dot is ignoring the rank="source".
According to the documentation:
If rank="min", all nodes are placed on the minimum rank. If
rank="source", all nodes are placed on the minimum rank, and the only
nodes on the minimum rank belong to some subgraph whose rank attribute
is "source" or "min".
The Sx nodes should be the only ones on the lowest rank
(as if there were an additional St->t_2[style=invis]; edge).
Is this a bug, or do I misunderstand the documentation?
I vote "bug" (probably in software, maybe in documentation).
Here is a work-around, using the minlen attribute to "force" the Nb node (and the cluster) "down" the ranks.
digraph{
rankdir = LR;
{
rank="source";
Sa;
Sb;
Sc;
St;
}
St -> {t_1[label="t",shape=plaintext];}
Na;
{t_a[label="t",shape=plaintext];}->Na
Sa->Na;
Sb->Na;
Sc->Na;
subgraph cluster_b {
fillcolor = "#ddDDdd";
style=filled;
label="";
Nb1;
Nb;
Nb1->Nb;
}
{t_2[label="t",shape=plaintext];}->Nb1
Sa->Nb [minlen=3] // will this move the cluster??
Nc;
{t_c[label="t",shape=plaintext];}->Nc
Nd;
{t_d[label="t",shape=plaintext];}->Nd
Na->Nd;
Nb->Nc;
Nd->O1;
Nc->Nd;
{
rank="sink";
O1;
}
}
Giving:
I have a solution to the classical mother vertex problem using DSU (disjoint set union). I have used path compression.
I wanted to know whether it is correct or not. I think the time complexity is O(E log(V)).
The solution proceeds as follows:
initialise each vertex's parent as itself;
as soon as an edge comes, try to join its endpoints. Note that 1->2 cannot be merged if 2 already has some other parent! For example, if the graph is 1->2, 3->4, 2->4,
the edge 1->2 merges, giving par[1] = par[2] = 1, and 3->4 merges, giving par[3] = par[4] = 3.
When it comes to merging 2->4, we cannot, since par[4] != 4, i.e. vertex 4 already has an incoming edge from outside this group.
At last, the parents of all vertices are checked; if they are all equal, then a mother vertex is present.
The code is:
#include <vector>
using namespace std;

class dsu
{
public:
    int cap;
    vector<int> par;
    dsu(int n)
    {
        cap = n;
        par.resize(cap);
        for(int i = 0; i < cap; i++)
            par[i] = i;
    }
    // find with path compression (path halving)
    int get(int a)
    {
        while(a != par[a])
        {
            par[a] = par[par[a]];
            a = par[a];
        }
        return a;
    }
    // merge b's group into a's group, but only if b is still its own parent,
    // i.e. b has no other incoming edge yet
    void join(int a, int b)
    {
        a = get(a);
        int pb = get(b);
        if(pb != b)
            return;
        par[pb] = a;
    }
};
int findMother(int n, vector<int> g[])
{
    // Build the DSU; if every vertex ends up with the same parent,
    // that parent is the mother vertex.
    dsu arr(n);
    for(int i = 0; i < n; i++)
    {
        for(auto a : g[i])
        {
            arr.join(i, a);
        }
    }
    int mother = arr.get(0);
    for(int i = 1; i < n; i++)
    {
        if(mother != arr.get(i))
            return -1;
    }
    return mother;
}
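For reference, here is a minimal driver (my own addition, not part of the original submission) that exercises findMother, assuming the dsu class and the function above are in the same file. It uses a 0-indexed version of the 1->2, 3->4, 2->4 example from above, plus one graph that does have a mother vertex:
#include <iostream>

int main()
{
    // 0-indexed version of the 1->2, 3->4, 2->4 example: edges 0->1, 2->3, 1->3.
    // No vertex reaches all the others, so the expected result is -1.
    vector<int> g[4];
    g[0].push_back(1);
    g[2].push_back(3);
    g[1].push_back(3);
    cout << findMother(4, g) << "\n";   // -1

    // 0->1, 1->2, 0->3: vertex 0 reaches every vertex, so the expected result is 0.
    vector<int> h[4];
    h[0].push_back(1);
    h[1].push_back(2);
    h[0].push_back(3);
    cout << findMother(4, h) << "\n";   // 0
}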
After some research I have found out that it is correct. It can be used to find the mother vertex.
Given matrices A and B, the tropical product is defined to be the usual matrix product with multiplication traded out for addition and addition traded out for minimum. That is, it returns a new matrix C such that
C_ij = minimum(A_ij, B_ij, A_i1 + B_1j, A_i2 + B_2j, ..., A_im + B_mj)
Given the underlying adjacency matrix A_g of a graph g, the nth "power" with respect to the tropical product represents the connections between nodes reachable in at most n steps. That is, C_ij = (A**n)_ij has value m if nodes i and j are separated by m <= n edges.
In general, for a graph with N nodes the diameter can be at most N, and for a graph with diameter k we have A**n = A**k for all n >= k; the matrix D = A**k is called the "distance matrix", its entries representing the distances between all pairs of nodes in the graph.
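For concreteness, here is a dense-matrix sketch of the product defined above and of iterating it to a fixed point, written in C++ rather than Chapel (my own illustration; the names tropic and tropicLimit merely mirror the functions discussed below, and square matrices with INF marking "no edge" are assumed):
#include <algorithm>
#include <limits>
#include <vector>
using namespace std;

using Mat = vector<vector<double>>;
const double INF = numeric_limits<double>::infinity();

// C_ij = minimum(A_ij, B_ij, A_i1 + B_1j, ..., A_im + B_mj)
Mat tropic(const Mat& A, const Mat& B)
{
    int n = A.size();
    Mat C(n, vector<double>(n, INF));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
        {
            C[i][j] = min(A[i][j], B[i][j]);
            for (int k = 0; k < n; k++)
                C[i][j] = min(C[i][j], A[i][k] + B[k][j]);
        }
    return C;
}

// Multiply until nothing changes any more; starting from an adjacency
// matrix this converges to the distance matrix described above.
Mat tropicLimit(Mat A, const Mat& B)
{
    while (true)
    {
        Mat R = tropic(A, B);
        if (R == A)
            return A;
        A = R;
    }
}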
I have written a tropical product function in Chapel and I want to write a function that takes an adjacency matrix and returns the resulting distance matrix. I have tried the following approaches to no avail. Guidance in getting past these errors would be greatly appreciated!
proc tropicLimit(A:[] real, B:[] real) {
  var R = tropic(A,B);
  if A == R {
    return A;
  } else {
    tropicLimit(R,B);
  }
}
which threw a domain mismatch error, so I made the following edit:
proc tropicLimit(A:[] real, B:[] real) {
  var R = tropic(A,B);
  if A.domain == R.domain {
    if && reduce (A == R) {
      return R;
    } else {
      tropicLimit(R,B);
    }
  } else {
    tropicLimit(R,B);
  }
}
which throws
src/MatrixOps.chpl:602: error: control reaches end of function that returns a value
proc tropicLimit(A:[] real, B:[] real) {
  var R = tropic(A,B);
  if A.domain == R.domain {
    if && reduce (A == R) { // Line 605 is this one
    } else {
      tropicLimit(R,B);
    }
  } else {
    tropicLimit(R,B);
  }
  return R;
}
which brings me back to this error:
src/MatrixOps.chpl:605: error: halt reached - Sparse arrays can't be zippered with anything other than their domains and sibling arrays (CS layout)
I also tried using a for loop with a break condition, but that didn't work either:
proc tropicLimit(B:[] real) {
  var R = tropic(B,B);
  for n in B.domain.dim(2) {
    var S = tropic(R,B);
    if S.domain != R.domain {
      R = S; // Intended to just reassign the handle "R" to the contents of "S" i.o.w. destructive update of R
    } else {
      break;
    }
  }
  return R;
}
Any suggestions?
src/MatrixOps.chpl:605: error: halt reached - Sparse arrays can't be zippered with anything other than their domains and sibling arrays (CS layout)
I believe you are encountering a limitation of zippering sparse arrays in the current implementation, documented in #6577.
Removing some unknowns from the equation, I believe this distilled code snippet demonstrates the issue you are encountering:
use LayoutCS;
var dom = {1..10, 1..10};
var Adom: sparse subdomain(dom) dmapped CS();
var Bdom: sparse subdomain(dom) dmapped CS();
var A: [Adom] real;
var B: [Bdom] real;
Adom += (1,1);
Bdom += (1,1);
A[1,1] = 1.0;
B[1,1] = 2.0;
writeln(A.domain == B.domain); // true
var willThisWork = && reduce (A == B);
// dang.chpl:19: error: halt reached - Sparse arrays can't be zippered with
// anything other than their domains and sibling arrays (CS layout)
As a work-around, I would suggest looping over the sparse indices after confirming the domains are equal and performing a && reduce. This is something you could wrap in a helper function, e.g.
use LayoutCS;

proc main() {
  var dom = {1..10, 1..10};
  var Adom: sparse subdomain(dom) dmapped CS();
  var Bdom: sparse subdomain(dom) dmapped CS();
  var A: [Adom] real;
  var B: [Bdom] real;
  Adom += (1,1);
  Bdom += (1,1);
  A[1,1] = 1.0;
  B[1,1] = 2.0;
  if A.domain == B.domain {
    writeln(equal(A, B));
  }
}

/* Some day, this should be A.equals(B) ! */
proc equal(A: [], B: []) {
  // You could also return 'false' if domains do not match
  assert(A.domain == B.domain);
  var s = true;
  forall (i,j) in A.domain with (&& reduce s) {
    s &&= (A[i,j] == B[i,j]);
  }
  return s;
}
src/MatrixOps.chpl:602: error: control reaches end of function that returns a value
This error is a result of not returning something in every condition. I believe you intended to do:
proc tropicLimit(A:[] real, B:[] real) {
  var R = tropic(A,B);
  if A.domain == R.domain {
    if && reduce (A == R) {
      return R;
    } else {
      return tropicLimit(R,B);
    }
  } else {
    return tropicLimit(R,B);
  }
}
I am trying to use GLPK to enumerate all possible paths from a source node to a target node, but I am having some problems with the syntax. Here's my current code (adapted from the shortest path example):
param n, integer, > 0;
/* number of nodes */
set E, within {i in 0..n, j in 0..n};
/* set of edges */
param s, in {0..n};
/* source node */
param t, in {0..n};
/* target node */
var x{(i,j) in E}, >= 0;
/* x[i,j] = 1 means that edge (i,j) belong to shortest path;
x[i,j] = 0 means that edge (i,j) does not belong to shortest path;
note that variables x[i,j] are binary, however, there is no need to
declare them so due to the totally unimodular constraint matrix */
s.t. r{i in 1..n}: sum{(j,i) in E} x[j,i] + (if i = s then 1) =
sum{(i,j) in E} x[i,j] + (if i = t then 1);
/* conservation conditions for unity flow from s to t; every feasible
solution is a path from s to t */
var test, integer, =0;
minimize Z: sum{(i,j) in E} x[i,j];
/* objective function is the path length to be minimized */
solve;
for {(i,j) in E: x[i,j]>0} {
    printf "%d --> %d ", i, j;
}
#printf " path length %d ", count;
So this function, biggest_dist, finds the diameter of a graph (the given graph in the task is always a tree).
What I want it to find instead is the center of the diameter, the node with the least maximum distance to all the other nodes.
I "kinda" understand the idea that we can do this by finding the path from u to t (where the distance between u and t is the diameter) by keeping track of the parent of each node. From there I choose the node in the middle of u and t? My question is how do I implement that for this function here? Will this make it output node 2 for this graph?
int biggest_dist(int n, int v, const vector< vector<int> >& graph)
// n is the number of nodes, v is an arbitrary vertex
{   // This function finds the diameter of the graph
    int INF = 2 * graph.size(); // Bigger than any other length
    vector<int> dist(n, INF);
    dist[v] = 0;
    queue<int> next;
    next.push(v);
    int bdist = 0; // biggest distance
    while (!next.empty()) {
        int pos = next.front();
        next.pop();
        bdist = dist[pos];
        for (int i = 0; i < graph[pos].size(); ++i) {
            int nghbr = graph[pos][i];
            if (dist[nghbr] > dist[pos] + 1) {
                dist[nghbr] = dist[pos] + 1;
                next.push(nghbr);
            }
        }
    }
    return bdist;
}
As a matter of fact, this function does not compute the diameter. It computes the distance from a given vertex v to the vertex furthest away from it.
To compute the diameter of a tree, you need first to choose an arbitrary vertex (let's say v), then find the vertex that is furthest away from v (let's say w), and then find a vertex that is furthest away from w, let's say u. The distance between w and u is the diameter of the tree, but the distance between v and w (what your function is computing) is not guaranteed to be the diameter.
To make your function compute the diameter, you will need to make it return the vertex it found along with the distance. Conveniently, it will always be the last element you process, so just make your function remember the last element it processed along with the distance to that element, and return them both. Then call your function twice, first from any arbitrary vertex, then from the vertex that the first call returned.
To make it actually find the center, you can also remember the parent for each node during your BFS. To do so, allocate an extra array, say prev, and when you do
dist[nghbr] = dist[pos] + 1;
also do
prev[nghbr] = pos;
Then at the end of the second call to the function, you can just descend bdist/2 times into the prev, something like:
center = lastVertex;
for (int i = 0; i + i < bdist; ++ i) center = prev[center];
So with a few tweaks to your function (making it return the vertex furthest from v and the vertex in the middle of that path, and not return the diameter at all), this code is likely to return the center of the tree (I only tested it on your example, so it might have some off-by-one errors):
pair<int, int> biggest_dist(int n, int v, const vector< vector<int> >& graph)
{
    int INF = 2 * graph.size(); // Bigger than any other length
    vector<int> dist(n, INF);
    vector<int> prev(n, INF);
    dist[v] = 0;
    queue<int> next;
    next.push(v);
    int bdist = 0; // biggest distance
    int lastV = v;
    while (!next.empty()) {
        int pos = next.front();
        next.pop();
        bdist = dist[pos];
        lastV = pos;
        for (int i = 0; i < graph[pos].size(); ++i) {
            int nghbr = graph[pos][i];
            if (dist[nghbr] > dist[pos] + 1) {
                dist[nghbr] = dist[pos] + 1;
                prev[nghbr] = pos;
                next.push(nghbr);
            }
        }
    }
    int center = lastV;
    for (int i = 0; i + i < bdist; ++i) center = prev[center];
    return make_pair(lastV, center);
}
int getCenter(int n, const vector< vector<int> >& graph)
{
    // first call is to get the vertex that is furthest away from vertex 0, where 0 is just an arbitrary vertex
    pair<int, int> firstResult = biggest_dist(n, 0, graph);
    // second call is to find the vertex that is furthest away from the vertex just found
    pair<int, int> secondResult = biggest_dist(n, firstResult.first, graph);
    return secondResult.second;
}
Suppose pre-order and post-order traversals and k are given. How many k-ary trees are there with these traversals?
A k-ary tree is a rooted tree for which each vertex has at most k children.
It depends on the particular traversal pair. For instance
pre-order: a b c
post-order: b c a
describes only one possible tree (the fewest possible, unless you include inconsistent traversal pairs). On the other hand:
pre-order: a b c
post-order: c b a
describes 2^(3-1) = 4 trees (the most possible amongst all scenarios where the traversals have 3 nodes and k can be anything), namely the 4 3-node lines.
If you want to know the number of possible binary trees having the given pre-order and post-order traversals, you should first draw one possible tree, then count the number of nodes with only one child. The total number of possible trees would be: 2^(number of single-child nodes)
As an example:
pre: adbefgchij
post: dgfebijhca
I drew one tree that has 3 single-child nodes, so the number of possible trees is 8.
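You do not actually have to draw a tree to count the single-child nodes: assuming the two traversals are consistent and the labels are distinct characters, a node has exactly one child precisely when its successor in pre-order is also its predecessor in post-order. Here is a small sketch of that counting rule (countBinaryTrees is my own illustrative helper, not code from the answer above):
#include <cstdint>
#include <iostream>
#include <string>
using namespace std;

// 2^(number of single-child nodes), assuming pre/post describe at least one binary tree
uint64_t countBinaryTrees(const string& pre, const string& post)
{
    int posInPost[256];
    for (int i = 0; i < (int)post.size(); ++i)
        posInPost[(unsigned char)post[i]] = i;
    int singles = 0;
    for (int i = 0; i + 1 < (int)pre.size(); ++i)
    {
        int j = posInPost[(unsigned char)pre[i]];
        if (j > 0 && post[j - 1] == pre[i + 1])
            ++singles; // pre[i] has exactly one child
    }
    return uint64_t(1) << singles;
}

int main()
{
    cout << countBinaryTrees("abc", "cba") << "\n"; // 4, as in the example above
    cout << countBinaryTrees("abc", "bca") << "\n"; // 1
}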
First determine the range of each subtree with a DFS, counting how many children each node has, then combine the results: a node with son children can place them into the m available child positions in C(m, son) ways.
#include <cstdio>
#include <cstring>

const int maxn = 30;
int C[maxn][maxn];
char pre[maxn], post[maxn];
int n, m;

// Precompute binomial coefficients C[i][j] = "i choose j".
void prepare()
{
    memset(C, 0, sizeof(C));
    for(int i = 0; i < maxn; i++)
    {
        C[i][0] = 1;
    }
    for(int i = 1; i < maxn; i++)
    {
        for(int j = 1; j <= i; j++)
        {
            C[i][j] = C[i-1][j-1] + C[i-1][j];
        }
    }
    return;
}

// Count the trees for the subtree occupying pre[rs..rt] and post[os..ot].
int dfs(int rs, int rt, int os, int ot)
{
    if(rs == rt) return 1;
    int son = 0, res = 1;
    int l = rs + 1, r = os;
    while(l <= rt)
    {
        // pre[l] is the root of the next child; find where its block ends in the post-order
        while(r < ot)
        {
            if(pre[l] == post[r])
            {
                son++;
                break;
            }
            r++;
        }
        res *= dfs(l, l + r - os, os, r);
        l += r - os + 1;
        rs = l - 1;
        os = ++r;
    }
    // son children can be placed into the m child slots in C(m, son) ways
    return res * C[m][son];
}

int main()
{
    prepare();
    while(scanf("%d", &m) && m)
    {
        scanf("%s %s", pre, post);
        n = strlen(pre);
        printf("%d\n", dfs(0, n-1, 0, n-1));
    }
    return 0;
}