Please read my question before you report it as a duplicate. In the literature, the common approach to finding the minimum height of a tree is as follows:
int minDepth(TreeNode root) {
    if (root == null) { return 0; }
    return 1 + Math.min(minDepth(root.left), minDepth(root.right));
}
However, I think it does not distinguish between a leaf and a node with only one child, and so it returns a wrong value. For example, if our tree looks like this:
A is root
B is the left child of A
C is the right child of B
M is the left child of C
This function returns 1, while the only leaf (M) is 3 hops away from the root, so the minimum height should be 4.
Since this recursive version is generally suggested in the literature, I think I am missing something.
Could somebody clear this up for me?
Your comments indicate that the texts where you found this actually use the same definitions for the terms I use. If that is indeed the case, then the question is not why the algorithm you have shown is correct – it simply is wrong under those conditions.
Just take the third simplest binary tree, the one consisting of two nodes. It has exactly one leaf, its depth is two and its minimal depth is also two. But the algorithm you quoted returns the value one. So, unless the authors use a different definition for one of the terms (e.g., “minimal height” meaning “shortest path out of the tree”/“shortest path to a null pointer”), the result is simply wrong.
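For what it's worth, here is a leaf-aware variant as a minimal sketch of my own (not taken from any of the texts): a missing child must not short-circuit the minimum to zero, so a node with only one child recurses only into the child it actually has.

int minDepth(TreeNode root) {
    if (root == null) { return 0; }
    // A node with a single child is not a leaf, so ignore the missing side.
    if (root.left == null) { return 1 + minDepth(root.right); }
    if (root.right == null) { return 1 + minDepth(root.left); }
    return 1 + Math.min(minDepth(root.left), minDepth(root.right));
}

On the A-B-C-M example from the question, this version returns 4, as expected.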
I am new to AI and was going through Peter Norvig's book. I've already looked at this question: What is the number of nodes generated by breadth-first search?.
It says that if we apply the goal test to each node when it is selected for expansion, then the number of generated nodes is 1 + b + b^2 + b^3 + ... + b^d + (b^(d+1) - b).
But what if my goal state is a leaf node at the final depth, so there is no depth at all after the goal? Then how can b^(d+1) be evaluated? E.g., in a tree with maximum depth 3, if my goal lies at depth 3, how would I evaluate b^(3+1) when there is no 4th level at all? Please clear up my doubt. Thanks in advance!
Note that the answer you linked mentions that this is the number of nodes that will be generated in the worst case.
Generated means that not all of those nodes are tested to see if they are the goal; they're simply generated and stored so that they can eventually be compared to the goal in case the goal is not found yet.
Worst case has two important implications. Try to visualize the Breadth-First Search going from left to right, then down one level, then left to right again, then down, etc. With worst case we assume that, on whatever depth level d the goal is located, the goal is the very last (rightmost) node. This means that all nodes to the left of it are compared to the goal node, and any successors/children of them are generated as well.
Now, I know that you said that in your case there are no nodes at a depth level below d, but the second implication of saying worst case is that we do assume there are basically infinitely many depth levels.
Indeed, for your case that equation is not entirely correct, but this is simply because you don't have the worst case. In your case, the search process would indeed not have to generate the last (b^(d+1) - b) nodes of the equation.
A final note on the terminology you used: you asked how b^(d+1) (for example, b^(3+1)) can be evaluated if there is no depth level below d = 3. There is no problem evaluating that term mathematically. Even though in your case there is no depth level 4, we can still evaluate b^(3+1) just fine; it simply would not make sense to include it, because it does not correspond to any generated nodes.
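If it helps, here is a tiny illustrative helper of my own (not from the linked answer) that simply evaluates the worst-case count 1 + b + b^2 + ... + b^d + (b^(d+1) - b) for a branching factor b and goal depth d:

static long worstCaseGenerated(long b, int d) {
    long sum = 0;
    long power = 1;                  // b^0
    for (int i = 0; i <= d; i++) {   // accumulates 1 + b + ... + b^d
        sum += power;
        power *= b;
    }
    return sum + (power - b);        // power is now b^(d+1)
}

For b = 2 and d = 3 it returns 15 + 14 = 29, while in the scenario from the question, where depth d is the final level, only the first 15 of those nodes could actually be generated.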
I'm trying to use Neo4j to analyze relationships in a family tree. I've modeled it like so:
(p1:Person)-[:CHILD]->(f:Family)<-[:FATHER|MOTHER]-(p2)
I know I could have left out the Family label and just had children connected to each parent, but that's not practical for my purposes. Here's an example of my graph; the black line is the path I want it to generate:
I can query for it with
MATCH p=(n {personID:3})-[:CHILD]->()<-[:FATHER|MOTHER]-()-[:CHILD]->()<-[:FATHER|MOTHER]-()-[:CHILD]->()<-[:FATHER|MOTHER]-() RETURN p
but there's a repeating pattern to the relationships. Could I do something like:
MATCH p=(n {personID:3})(-[:CHILD]->()<-[:FATHER|MOTHER]-())* RETURN p
where the * means repeat the :CHILD then :FATHER|MOTHER relationships, with the directions being different? Obviously if the relationships were all the same direction, I could use
-[:CHILD|FATHER|MOTHER*]->
I want to be able to query it from Person #3 all the way to the top of the graph like a pedigree chart, but also be specific about how many levels if needed (like 3 generations, as opposed to end-of-line).
Another issue I'm having: if I don't put directions on the relationships, like -[:CHILD|FATHER|MOTHER*]-, then it will start at Person #3 and go in the direction I want (alternating arrows), but it will also descend back down the chain, finding all the other cousins, aunts, uncles, etc.
Any seasoned Cypher experts who can help me?
I was just working on the same problem, and I found that the APOC "Expand path" procedures accomplish exactly what you/we want.
Applied to your example, you could use apoc.path.subgraphNodes to get all ancestors of Person #3:
MATCH (p1:Person {personId:3})
CALL apoc.path.subgraphNodes(p1, {
  sequence: '>Person,CHILD>,Family,<MOTHER|<FATHER'
}) YIELD node
RETURN node
Or if you want only ancestors up to 3 generations from the start person, add maxLevel: 6 to the config (as one generation is defined by 2 relationships, 3 generations are 6 levels):
MATCH (p1:Person {personId:3})
CALL apoc.path.subgraphNodes(p1, {
  sequence: '>Person,CHILD>,Family,<MOTHER|<FATHER',
  maxLevel: 6
}) YIELD node
RETURN node
And if you want only ancestors of the 3rd generation, i.e. only great-grandparents, you can also specify minLevel (using apoc.path.expandConfig):
MATCH (p1:Person {personId:3})
CALL apoc.path.expandConfig(p1, {
  sequence: '>Person,CHILD>,Family,<MOTHER|<FATHER',
  minLevel: 6,
  maxLevel: 6
}) YIELD path
WITH last(nodes(path)) AS person
RETURN person
You could reverse the directionality of the CHILD relationships in your model, as in:
(p1:Person)<-[:CHILD]-(f:Family)<-[:FATHER|MOTHER]-(p2)
This way, you can use a simple -[:CHILD|FATHER|MOTHER*]-> pattern in your queries.
Reversing the directionality is actually intuitive as well, since you can then more naturally visualize the graph as a family tree, with all the arrows flowing "downwards" from ancestors to descendants.
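As an illustration (my own sketch, reusing the personID property from the question and assuming the reversed model above), the whole ancestor line can then be matched with one variable-length pattern, and bounded to 3 generations with *..6:

MATCH p = (ancestor:Person)-[:FATHER|MOTHER|CHILD*..6]->(n:Person {personID: 3})
RETURN p

Dropping the ..6 bound walks the pedigree all the way to the end of the line.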
Yeah, that's an interesting case. I'm pretty sure (though I'm open to correction) that this is just not possible. Would it be possible for you to have and maintain both? You could have a simple cypher query create the extra relationships:
MATCH (parent)-[:MOTHER|FATHER]->()<-[:CHILD]-(child)
CREATE (child)-[:CHILD_OF]->(parent)
Ok, so here's a thought:
MATCH path=(child:Person {personID: 3})-[:CHILD|FATHER|MOTHER*]-(ancestor:Person)
WHERE (ancestor)-[:MOTHER|FATHER]->()
RETURN path
Normally I'd use a second clause in the MATCH like this:
MATCH
  path=(child:Person {personID: 3})-[:CHILD|FATHER|MOTHER*]-(ancestor:Person),
  (ancestor)-[:MOTHER|FATHER]->()
RETURN path
But Neo4j (at least by default, I think) doesn't traverse back through the path. Then again, maybe comma-separating would be fine and it's this combined single-pattern version that would be a problem:
MATCH path=(child:Person {personID: 3})-[:CHILD|FATHER|MOTHER*]-(ancestor:Person)-[:MOTHER|FATHER]->()
I'm curious to know what you find!
I am trying to solve some problems from past exam papers, and I have this one:
Consider the graph G = (N, A) where N = {a, b, c, d, e, f, g, h} and A is the following set of arcs: {(0,5), (5,4), (4,5), (4,1), (1,2), (2,3), (3,4), (4,3), (0,6), (6,7)}, and I have to draw the depth-first search tree T for G with the root being 0.
This is the graph:
I got the following tree:
and the answer is this one:
(for both cases above, please ignore the arrows)
and I don't understand why. Can anyone explain to me what I'm doing wrong?
Thanks!
Both trees are correct in the sense that both of them can be generated by depth-first search. To my understanding, the key point here is that, for a given graph, several depth-first search trees may exist, depending on the order in which the children of the current node are visited. More precisely, depth-first search without any fixed rule for iterating over children is not a deterministic procedure. As indicated in the comments, your solution can be obtained by always selecting the child with the minimum node index, whereas the proposed solution can be generated by always selecting the child with the maximum node index.
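To make this concrete, here is a small sketch of my own (assuming a directed adjacency list built from the arcs in the question) that prints the parent map of the DFS tree under both child-ordering rules:

import java.util.*;

public class DfsTreeDemo {
    static Map<Integer, Integer> parent = new HashMap<>();

    static void dfs(int u, Map<Integer, List<Integer>> adj, Set<Integer> seen,
                    Comparator<Integer> order) {
        seen.add(u);
        List<Integer> children = new ArrayList<>(adj.getOrDefault(u, List.of()));
        children.sort(order);                    // the only free choice DFS has
        for (int v : children) {
            if (!seen.contains(v)) {
                parent.put(v, u);                // (u, v) becomes a tree edge
                dfs(v, adj, seen, order);
            }
        }
    }

    public static void main(String[] args) {
        int[][] arcs = {{0,5},{5,4},{4,5},{4,1},{1,2},{2,3},{3,4},{4,3},{0,6},{6,7}};
        Map<Integer, List<Integer>> adj = new HashMap<>();
        for (int[] a : arcs) adj.computeIfAbsent(a[0], k -> new ArrayList<>()).add(a[1]);

        parent.clear();
        dfs(0, adj, new HashSet<>(), Comparator.naturalOrder());   // smallest child first
        System.out.println("smallest child first: " + parent);

        parent.clear();
        dfs(0, adj, new HashSet<>(), Comparator.reverseOrder());   // largest child first
        System.out.println("largest child first:  " + parent);
    }
}

The two runs produce two different, equally valid DFS trees rooted at 0.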
Perform Depth-first Search on the graph shown starting with vertex a. When you traverse the neighbours, process them in alphabetical order.
The question is to find the DFI, Level and the Parent of each vertex.
Here is a picture of it:
I'm unsure of how to get going with this; it is a practice question for an upcoming exam. I know that depth-first search uses a stack, and that it will start at vertex a and visit neighbours in alphabetical order, but I'm not sure how I would get the values for each of the columns. Can someone explain further or help me with this?
So you start at 'a' and must traverse the nodes in alphabetical order. From a you have the option of going to b or g, and you choose b because it comes first alphabetically. From b your only choice is g, and so on.
Now for your values: the parent of a is null since you have no previous node, the parent of b is a, the parent of g is b, and so on.
The DFS level is the level a node would end up on in a tree. Imagine that you do your traversal, then erase all edges that weren't part of the traversal, and then take your root and 'shake it out': you rearrange it so that it looks like a tree (this particular graph is very uninteresting), and then you assign levels based on that tree.
And the DFS index is simply the order in which you touched the nodes.
The following is for your graph but using g as a starting point; I think it makes it slightly more interesting.
The numbers are the order in which the edges were taken.
Here is what I was talking about when I said 'shake it out': this is what your tree looks like, and in blue I show the level of each node (0-based). I hope the images make it a little more understandable.
The one I drew (the terrible freehand one) was formed by deleting all of the edges that weren't used and then rearranging them to look like a tree.
You can think of the depth as how many steps I had to take from the root to get to the current node. From g to b is 1 step, so a depth of 1; from g to i it is 3, because we go g->c->d->i, 3 steps. After you have made your traversal, you ignore the fact that you can in fact get from g to i in two steps (g->h->i), because that wasn't part of the traversal.
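Putting the three columns together, here is a rough sketch of my own (the actual adjacency list is in your picture, so it is left out here) of how the index, level and parent can be recorded during the traversal:

import java.util.*;

public class DfsTable {
    static int counter = 1;   // 1-based DFS index; start at 0 for a zero-based scheme
    static Map<Character, Integer> index = new HashMap<>();
    static Map<Character, Integer> level = new HashMap<>();
    static Map<Character, Character> parent = new HashMap<>();

    static void dfs(char u, Map<Character, List<Character>> adj, int depth) {
        index.put(u, counter++);                 // order in which the node is first touched
        level.put(u, depth);                     // steps taken from the root
        List<Character> next = new ArrayList<>(adj.getOrDefault(u, List.of()));
        Collections.sort(next);                  // process neighbours in alphabetical order
        for (char v : next) {
            if (!index.containsKey(v)) {         // not visited yet
                parent.put(v, u);                // we reached v from u
                dfs(v, adj, depth + 1);
            }
        }
    }
}

Calling dfs('a', adj, 0) (or dfs('g', adj, 0) for the variant above) fills the three maps, which is the table the exercise asks for; the start vertex simply gets no parent entry.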
The index is simply the number giving the order in which the node is visited. a is first, so write 1 there. Knowing depth-first search as you do, you should know what the second node is, so write 2 under that. Depth is how deep a node is: every time you go deeper it increases, and whenever you go shallower it decreases. So a is on depth 1; the next node and its siblings will be on depth 2, etc. The parent is the letter identifying the node that you just came from, so a has no parent, and the node with index 2 will have a as its parent.
If your class uses a zero-based numbering system, replace 2 in the above paragraph with 1, and 1 with 0. If you have no idea what "zero-based numbering system" is, ignore this paragraph.
I have been trying to learn Tarjan's algorithm from Wikipedia for 3 hours now, but I just can't make head or tail of it. :(
http://en.wikipedia.org/wiki/Tarjan's_strongly_connected_components_algorithm#cite_note-1
Why is it a subtree of the DFS tree? (actually DFS produces a forest? o_O)
And why does v.lowlink=v.index imply that v is a root?
Can someone please explain this to me / give the intuition or motivation behind this algorithm?
The idea is: When traversing the tree, every time you've searched through a branch and are backtracking, you check whether you've encountered an edge to an 'upper' node in the tree.
If you didn't (if (v.lowlink = v.index)), then you've just completed an SCC - it consists of the current node and all nodes on the stack. That's exactly a subtree of the DFS tree, except for the nodes in SCCs that were already completed.
If you did, you propagate this information to 'upper' nodes (v.lowlink := min(v.lowlink, w.lowlink)), because combined with the path in DFS tree the edge creates an 'upward' path.
DFS produces a forest, but you always consider one tree at a time. An SCC is always contained in one DFS tree; otherwise (being an SCC) there would be a path in both directions between both (all) trees in question, which is a contradiction.
Just adding to pjotr's answer: v.lowlink is basically the index of the uppermost node that you have found in the tree. Keep in mind that uppermost in this context means minimum, as you keep increasing indices as you walk down. Now, after processing all of a node's successors, there are basically three cases:
v.lowlink < v.index: This indicates that you have found a back edge. Note that we haven't just found any back edge, but one that points to a node that is "above" the current one. That's what v.lowlink < v.index implies.
v.lowlink = v.index: What we know in this case is that there is no back edge referring to anything above the current node. There might be a back edge to this node (which means that one of your successor nodes w has a lowlink such that w.lowlink = v.lowlink = v.index). It could also be that there was a back edge referring to something below the current node, which means that there was a strongly-connected component below the current node that has been printed out already. The current node, however, is definitely the root of a strongly-connected component as well.
v.lowlink > v.index: That's actually not possible. I'm just listing it for the sake of completeness. ;)
Hope it helps!
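For reference, here is a compact sketch of the algorithm being discussed (my own transcription of the standard pseudocode from the Wikipedia article linked in the question, so treat the details as illustrative rather than authoritative):

import java.util.*;

public class TarjanSCC {
    List<List<Integer>> adj;                      // adjacency lists of the directed graph
    int[] index, lowlink;
    boolean[] onStack;
    Deque<Integer> stack = new ArrayDeque<>();
    int counter = 0;
    List<List<Integer>> sccs = new ArrayList<>();

    TarjanSCC(List<List<Integer>> adj) {
        this.adj = adj;
        int n = adj.size();
        index = new int[n];
        lowlink = new int[n];
        onStack = new boolean[n];
        Arrays.fill(index, -1);                   // -1 means "not visited yet"
        for (int v = 0; v < n; v++)
            if (index[v] == -1) strongConnect(v); // DFS forest: one tree per unvisited root
    }

    void strongConnect(int v) {
        index[v] = lowlink[v] = counter++;
        stack.push(v);
        onStack[v] = true;
        for (int w : adj.get(v)) {
            if (index[w] == -1) {                 // tree edge: recurse, then inherit child's lowlink
                strongConnect(w);
                lowlink[v] = Math.min(lowlink[v], lowlink[w]);
            } else if (onStack[w]) {              // edge back into the still-open part of the tree
                lowlink[v] = Math.min(lowlink[v], index[w]);
            }
        }
        if (lowlink[v] == index[v]) {             // no edge reaches above v: v roots an SCC
            List<Integer> scc = new ArrayList<>();
            int w;
            do {
                w = stack.pop();
                onStack[w] = false;
                scc.add(w);
            } while (w != v);
            sccs.add(scc);
        }
    }
}

The pop-until-v loop at the end is exactly the "current node and all nodes on the stack" that pjotr describes.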
Some intuition about Tarjan's algorithm:
During DFS, when we encounter a back edge from vertex v, we update its lowest reachable ancestor, i.e. we update the value of low[v].
Now, when all the outgoing edges of a vertex have been processed, i.e. we are about to exit the DFS call for vertex v, we check the value of low[v], namely whether low[v] == v (explanation below). If not, this means v is not the root of its SCC, and we pass the benefit on to the parent of v, i.e. the lowest reachable ancestor of parent[v] is now changed to low[v].
This is logical because, although there is no direct back edge from parent[v] to the ancestor of v, there is a path (the edge towards v plus v's back edge) via which parent[v] can still reach that ancestor.
Thus we have also updated low[parent[v]] here.
Therefore this chain keeps getting updated as we backtrack, and low[v] keeps propagating upwards, until we reach the ancestor itself. For this ancestor, low[v] is equal to v, and thus it acts as the root of the SCC.
Hope this helps!
Path for Mastering Bridges and Articulation Algorithms
Watch this video to get a solid feeling that you have understood the algorithm well.
Practice the algorithm from here and here.
Practice problems to master it:
A. Cutting Figure
EC_P - Critical Edges
SUBMERGE - Submerging Islands
POLQUERY - Police Query