Leetcode 110 Balanced Binary Tree - recursion

Given a binary tree, determine if it is height-balanced.
So I have been learning to code on LeetCode and was trying to understand this code for problem #110. Anyhoo, below is the code:
class Solution(object):
    def isBalanced(self, root):
        def check(root):
            if root is None:
                return 0
            left = check(root.left)
            right = check(root.right)
            if left == -1 or right == -1 or abs(left - right) > 1:
                return -1
            return 1 + max(left, right)
        return check(root) != -1
I am new to recursion and was trying to understand this code. I understand that the first if is the base case and the second if handles the case where the tree is not balanced. But I don't understand how the last line, return 1 + max(left, right), works and how it helps calculate the height of a subtree.
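(For reference, here is a tiny traced example of that last line; the TreeNode class below is just a stand-in for LeetCode's own node definition, and running it right after the snippet above prints True.)

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

#     1          check() on each leaf returns 1 + max(0, 0) = 1 (its height),
#    / \         so check() on the root returns 1 + max(1, 1) = 2,
#   2   3        and 2 != -1 means the tree is balanced.
root = TreeNode(1, TreeNode(2), TreeNode(3))
print(Solution().isBalanced(root))  # True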
P.S. I don't know. I have been trying to learn to code for a while now; I seem to make a little progress, then it becomes overwhelming, and then there's a gap of a week or two before I drag myself back to sit with it again. Has this happened to anyone else? I know I like to code because I like seeing the test cases pass when it all goes green, lol. But it's coming harder than I expected. Lol, just venting on the internet I guess. Let me know, guys. Also, if you have any good resources for learning recursion, please link them below.

Related

Using results from ODEProblem while it is running

I’m currently studying the documentation of DifferentialEquations.jl and trying to port my older computational neuroscience codes for using it instead of my own, less elegant and performant, ODE solvers. While doing this, I stumbled upon the following question: is it possible to access and use the results returned from the solver as soon as the current step is returned (instead of waiting for the problem to finish)?
I’m looking for a way to e.g. plot the voltage levels of a simulated neuron in real time, which seems like a simple enough task and one that's probably trivial to do using existing Julia packages, but I can't figure out how. Does it have anything to do with callbacks? Thanks in advance.
Plots.jl doesn't seem to be animating for me right now, but I'll show you the steps anyways. Yes, you can use a DiscreteCallback for this. If you make condition(u,t,integrator)=true then the affect! is called every step, and you could do that.
But, I think using the integrator interface is perfect for this case. Let me show you an example of this. Take the 2D problem from the tutorial:
using DifferentialEquations
using Plots
A = [ 1.  0  0 -5
      4  -2  4 -3
     -4   0  0  1
      5  -2  2  3]
u0 = rand(4,2)
tspan = (0.0,1.0)
f(u,p,t) = A*u
prob = ODEProblem(f,u0,tspan)
Now instead of using solve, use init to get an integrator out.
integrator = init(prob,Tsit5())
The integrator interface is defined in full at its documentation page, but the basic usage is that you can step using step!. If you put that in a loop and keep stepping then that's essentially what solve does. But it also has the iterator interface, so if you do something like for integ in integrator then inside of the for loop integ will be the current state of the integrator, with values integ.u at time point integ.t. It also has all sorts of things like a plot recipe for intermediate interpolation integ(t) (this is true even when dense=false because it's free and doesn't require extra saving allocations, so feel free to use it).
So, you can do
p = plot(integrator,markersize=0,legend=false,xlims=tspan)
anim = @animate for integ in integrator
plot!(p,integrator,lw=3)
end
plot(p)
gif(anim, "test.gif", fps = 2)
and Plots.jl will give you the animated gif that adds the current interval at each step. Here's what the end plot looks like:
Each step is colored differently because each step added a separate plot series, so you can see how the solution progressed. Of course, you can do anything inside of that loop, or if you want more control you can manually step!(integrator) as necessary.

Could I ask for physical analogies or metaphors for recursion?

I am suddenly in a recursive language class (sml) and recursion is not yet physically sensible for me. I'm thinking about the way a floor of square tiles is sometimes a model or metaphor for integer multiplication, or Cuisenaire Rods are a model or analogue for addition and subtraction. Does anyone have any such models you could share?
Imagine you're a real life magician, and can make a copy of yourself. You create your double a step closer to the goal and give him (or her) the same orders as you were given.
Your double does the same to his copy. He's a magician too, you see.
When the final copy finds itself created at the goal, it has nowhere more to go, so it reports back to its creator. Which does the same.
Eventually, you get your answer back (without having moved an inch) and can now easily create the final result from it. You get to pretend not to know about all those doubles doing the actual hard work for you. "Hmm," you say to yourself, "what if I were one step closer to the goal and already knew the result? Wouldn't it be easy to find the final answer then?" (*)
Of course, if you were a double, you'd have to report your findings to your creator.
More here.
(also, I think I saw this "doubles" creation chain event here, though I'm not entirely sure).
(*) and that is the essence of the recursion method of problem solving.
How do I know my procedure is right? If my simple little combination step produces a valid solution, under the assumption that the recursive call produced a correct solution for the smaller case, then all I need is to make sure it works for the smallest case, the base case, and by induction the validity is proven!
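In code, that recipe might look like this (a minimal Python sketch of mine, not taken from the thread):

def factorial(n):
    # Base case: the copy standing at the goal answers immediately.
    if n == 0:
        return 1
    # Ask a double one step closer to the goal, then do the easy final step.
    return n * factorial(n - 1)

print(factorial(5))  # 120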
Another possibility is divide-and-conquer, where we split our problem in two halves, so we get to the base case much, much faster. As long as the combination step is simple (and preserves the validity of the solution, of course), it works. In our magician metaphor, I get to create two copies of myself and combine their two answers into one when they are finished. Each of them creates two copies of themselves as well, so this creates a branching tree of magicians instead of a simple line as before.
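As a sketch of that branching shape (again just illustrative Python, summing a list by splitting it in half):

def total(xs):
    # Base cases: an empty list or a single element needs no further copies.
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    # Two copies of me, each handling half; combining the answers is one addition.
    return total(xs[:mid]) + total(xs[mid:])

print(total([3, 1, 4, 1, 5, 9]))  # 23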
A good example is the Sierpinski triangle, a figure that is built quite simply from three quarter-sized Sierpinski triangles stacked up at their corners.
Each of the three component triangles is built according to the same recipe.
Although this description has no base case, so the recursion is unbounded (bottomless, infinite), any finite drawing of the S.T. will presumably draw just a dot in place of any S.T. that is too small, and that size limit serves as the base case and stops the recursion.
There's a nice picture of it in the linked Wikipedia article.
Recursively drawing an S.T. without the size limit will never draw anything on screen! For mathematicians recursion may be great; engineers, though, should be more cautious about it. :)
Switching to corecursion / iteration (see the linked answer for that), we would first draw the outlines, and the interiors after that; so even without the size limit the picture would appear pretty quickly. The program would then be busy without any noticeable effect, but that's better than an empty screen.
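A rough Python sketch of that size-limited recursion (it just collects triangle corners rather than drawing; any plotting library could render them):

def sierpinski(p1, p2, p3, min_size, out):
    # Base case: a triangle too small to subdivide is kept as-is (the "dot").
    if abs(p2[0] - p1[0]) < min_size:
        out.append((p1, p2, p3))
        return
    # Midpoints of the three sides give the three smaller copies.
    m12 = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    m13 = ((p1[0] + p3[0]) / 2, (p1[1] + p3[1]) / 2)
    m23 = ((p2[0] + p3[0]) / 2, (p2[1] + p3[1]) / 2)
    sierpinski(p1, m12, m13, min_size, out)
    sierpinski(m12, p2, m23, min_size, out)
    sierpinski(m13, m23, p3, min_size, out)

triangles = []
sierpinski((0, 0), (1, 0), (0.5, 0.866), 0.1, triangles)
print(len(triangles))  # 81 small triangles at this cutoff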
I came across this piece from Edsger W. Dijkstra, in which he tells how a five-year-old grasped recursion:
A few years later a five-year-old son would show me how smoothly the idea of recursion comes to the unspoilt mind. Walking with me in the middle of town he suddenly remarked to me, "Daddy, not every boat has a lifeboat, has it?" I said, "How come?" "Well, the lifeboat could have a smaller lifeboat, but then that would be without one."
I love this question and couldn't resist adding an answer...
Recursion is the Russian doll of programming. The first example that comes to my mind is closer to an example of mutual recursion:
Mutual recursion everyday example
Mutual recursion is a particular case of recursion (but sometimes it's easier to understand a particular case than the generic one) in which we have two functions A and B defined such that A calls B and B calls A. You can experiment with this very easily using a webcam (it also works with two mirrors):
display the webcam output on your screen with VLC, or any software that can do it.
Point your webcam to the screen.
The screen will progressively display an infinite "vortex" of screens.
What happens ?
The webcam (A) captures the screen (B).
The screen displays the image captured by the webcam (the screen itself).
The webcam captures the screen with a screen displayed on it.
The screen displays that image (now there are two screens displayed).
And so on.
You finally end up with such an image (yes, my webcam is total crap):
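In plain code, the same back-and-forth might look like this (a minimal Python sketch with made-up is_even/is_odd helpers, nothing to do with the webcam setup):

def is_even(n):
    # A calls B: even-ness of n is defined via odd-ness of n - 1.
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    # B calls A back, one step closer to the base case.
    return False if n == 0 else is_even(n - 1)

print(is_even(10), is_odd(10))  # True False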
"Simple" recursion is more or less the same except that there is only one actor (function) that calls itself (A calls A)
"Simple" Recursion
That's more or less the same answer as @WillNess's, but with a little code and some interactivity (using the JS snippets of SO).
Let's say you are a very motivated gold-miner looking for gold, with a very tiny mine, so tiny that you can only look for gold vertically. And so you dig, and you check for gold. If you find some, you don't have to dig anymore, just take the gold and go. But if you don't, that means you have to dig deeper. So there are only two things that can stop you:
Finding some gold nugget.
The Earth's boiling core of molten iron.
So if you want to write this programmatically, using recursion, it could look something like this:
// Returns true with a probability of roughly 1 in 10
function checkForGold() {
    let rnd = Math.round(Math.random() * 10);
    return rnd === 1;
}

function digUntilYouFind() {
    if (checkForGold()) {
        return 1; // we found something, no need to dig deeper
    }
    // gold not found, dig deeper
    return digUntilYouFind();
}

let gold = digUntilYouFind();
console.log(`${gold} nugget found`);
Or with a little more interactivity:
// Returns true with a probability of roughly 1 in 10
function checkForGold() {
    console.log("checking...");
    let rnd = Math.round(Math.random() * 10);
    return rnd === 1;
}

function digUntilYouFind() {
    if (checkForGold()) {
        console.log("OMG, I found something!");
        return 1;
    }
    try {
        console.log("digging...");
        return digUntilYouFind();
    } finally {
        console.log("climbing back...");
    }
}

let gold = digUntilYouFind();
console.log(`${gold} nugget found`);
If we don't find any gold, the digUntilYouFind function calls itself. When the miner "climbs back" from his mine, it's actually the deepest child call to the function returning the gold nugget up through all its parents (the call stack) until the value can be assigned to the gold variable.
Here the probability is high enough to keep the miner from digging all the way down to the Earth's core. The Earth's core is to the miner what the stack size is to a program: when the miner reaches the core he dies in terrible pain, and when the program exceeds the stack size it causes a stack overflow and crashes.
There are optimizations the compiler/interpreter can make to allow deeper (effectively unbounded) recursion, such as tail-call optimization.
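To illustrate what that optimization buys you, here is the same dig-for-gold idea in Python (the function names are mine): once written recursively, and once as the constant-stack loop a tail-call-optimizing runtime would effectively turn it into.

import random

def check_for_gold():
    return random.random() < 0.1  # roughly a 1-in-10 chance per shovelful

def dig_recursive():
    if check_for_gold():
        return 1
    return dig_recursive()  # tail call: nothing left to do after it returns

def dig_iterative():
    # The loop a tail-call-optimizing compiler would effectively produce:
    # no new stack frame per shovelful, so no stack overflow.
    while not check_for_gold():
        pass
    return 1

print(dig_iterative(), "nugget found")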
Take fractals as being recursive: the same pattern gets applied each time, yet each figure differs from the others.
As natural phenomena with fractal features, Wikipedia presents:
Mountain ranges
Frost crystals
DNA
and, even, proteins.
This is odd, and not quite a physical example except insofar as dance-movement is physical. It occurred to me the other morning. I call it "Written in Latin, solved in Hebrew." Huh? Surely you are saying "Huh?"
By it I mean that encoding a recursion is usually done left-to-right, in the Latin alphabet style: "Def fac(n) = n*(fac(n-1))." The movement style is "outermost case to base case."
But (please check me on this) at least in this simple case, it seems the easiest way to evaluate it is right-to-left, in the Hebrew alphabet style: Start from the base case and move outward to the outermost case:
(fac(0) = 1)
(fac(1) = 1)*(fac(0) = 1)
(fac(2))*(fac(1) = 1)*(fac(0) = 1)
(fac(n))*(fac(n-1))*...*(fac(2))*(fac(1) = 1)*(fac(0) = 1)
(* Easier order to calculate <<<<<<<<<<< is leftwards,
base outwards to outermost case;
more difficult order to calculate >>>>>> is rightwards,
outermost case to base *)
Then you do not have to suspend items on the left while awaiting the results of calculations further right. "Dance Leftwards" instead of "Dance rightwards"?
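In Python terms, "dancing leftwards" is roughly what an accumulating loop does: it starts from the base case and builds outward, so nothing is left suspended on the stack (a sketch of mine, not from the original answer):

def fac_rightwards(n):
    # Written outermost-case-first; each call waits for the one below it.
    return 1 if n == 0 else n * fac_rightwards(n - 1)

def fac_leftwards(n):
    # Start from the base case and multiply outward; nothing is suspended.
    result = 1
    for k in range(1, n + 1):
        result *= k
    return result

print(fac_rightwards(5), fac_leftwards(5))  # 120 120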

Can someone explain this kind of recursion to me?

I want to understand recursion.
I understand the simple math examples, but I'm not sure I really get the essence of it.
I have one example that I don't understand how it works:
TREE-ROOT-INSERT(x, z)
    if x = NIL
        return z
    if z.key < x.key
        x.left = TREE-ROOT-INSERT(x.left, z)
        return RIGHT-ROTATE(x)
    else
        x.right = TREE-ROOT-INSERT(x.right, z)
        return LEFT-ROTATE(x)
I know what this code does:
It inserts a node into a BST and then rotates at each level so that the new node becomes the root.
But in my mind, analysing the code, I supposed that it inserts the node where it has to go and then rotates the tree JUST ONE TIME.
How is it possible that the tree is rotated every time?
You need to maintain your place in the recursive call for each level of the tree. When you hit return RIGHT-ROTATE (or left) for the first time, you're not completely done; you take the tree that is the result of the ROTATE function, and place it in the code where the recursive TREE-ROOT-INSERT call was one level higher in the stack. You then rotate again, and return the current tree one level higher up in the stack, until you've hit the original root of the tree.
What is important for understanding recursion is to think of the recursive function as an abstract black box. In other words, when reading or reasoning about a recursive function, you should focus on the current invocation, treat the recursive call as atomic (something you do not step into), assume it does what it is supposed to do, and see how its result can be used to solve the current invocation.
You already know the contract of your TREE-ROOT-INSERT(x, z):
insert z into a binary search tree rooted at x, transform the tree so that z will become the new root.
let's look at this snippet:
if z.key < x.key
    x.left = TREE-ROOT-INSERT(x.left, z)
    return RIGHT-ROTATE(x)
This says z is less than x, so it goes into the left sub-tree (because it is a BST). TREE-ROOT-INSERT is invoked again, but we won't follow into it. Instead we just assume it can do what it is meant to do: it will insert z into the tree rooted at x.left and make z the new root of that sub-tree. Then you will get a tree with the structure below:
      x
     / \
    z   ...
   / \
 ...   ...
Again, you don't know exactly how calling TREE-ROOT-INSERT(x.left, z) gets you the z-rooted sub-tree. At this moment you don't care, because the really important part is what follows: how do we make this entire tree rooted at z? The answer is RIGHT-ROTATE(x).
But in my mind, analysing the code, I supposed that it inserts the node where it has to go and then rotates the tree JUST ONE TIME.
How is it possible that the tree is rotated every time?
If I understand you correctly, you are still thinking about how to solve the problem in a non-recursive way. It is true that you can insert z into the BST rooted at x using the standard BST insertion procedure, and that will put z in the correct position. However, to bring z to the root from that position, you need more than one rotation.
In the recursive version, rotation is required to bring z to the root after you get a z-rooted sub-tree. But to get the z-rooted sub-tree from the original x.left rooted sub-tree, you need a rotation as well. Rotation is called many times, but on different sub-trees.
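For what it's worth, here is a minimal runnable sketch of the same procedure in Python (the Node class and the lower-case function names are mine, not part of the original pseudocode); tracing it makes the "one rotation per level" behaviour visible:

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def right_rotate(x):
    # Lift x.left above x; x becomes its right child.
    y = x.left
    x.left = y.right
    y.right = x
    return y

def left_rotate(x):
    # Lift x.right above x; x becomes its left child.
    y = x.right
    x.right = y.left
    y.left = x
    return y

def tree_root_insert(x, z):
    if x is None:
        return z  # base case: z is the root of this (empty) subtree
    if z.key < x.key:
        x.left = tree_root_insert(x.left, z)  # z is now the root of the left subtree
        return right_rotate(x)                # one rotation lifts z above x
    else:
        x.right = tree_root_insert(x.right, z)
        return left_rotate(x)

root = Node(5, Node(2), Node(8))
root = tree_root_insert(root, Node(4))
print(root.key)  # 4: the newly inserted key ends up at the root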

Strange behavior when implementing Back propagation in DBN

Currently I'm trying to implement a Deep Belief Network, but I've run into a very strange problem. My source code can be found here: https://github.com/mistree/GoDeep/blob/master/GoDeep/
I first implemented the RBM using CD and it works perfectly (using Golang's concurrency features it's quite fast). Then I started to implement a normal feed-forward network with back propagation, and that's where the strange thing happens: it seems very unstable. When I run the XOR gate test it sometimes fails; only when I set the hidden layer to 10 or more nodes does it never fail. Below is how I calculate it:
Step 1: calculate all the activations, with bias
Step 2: calculate the output error
Step 3: back-propagate the error to each node
Step 4: calculate the delta weight and bias for each node, with momentum
I run Steps 1 to 4 over a full batch and sum up these delta weights and biases.
Step 5: apply the averaged delta weights and biases
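For reference, those five steps correspond to a standard full-batch back-propagation update; below is a minimal NumPy sketch of them on XOR (the layer sizes, learning rate, and momentum are illustrative values of mine, not taken from the linked Go code, and results vary with the random seed).

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden, lr, momentum = 4, 1.0, 0.9
W1 = rng.normal(0, 1, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for _ in range(5000):
    # Step 1: forward pass (activations with bias)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Step 2: output error (squared error differentiated through the sigmoid)
    d_out = (out - Y) * out * (1 - out)
    # Step 3: back-propagate the error to the hidden layer
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Steps 4-5: averaged full-batch gradients, momentum, and the update
    vW2 = momentum * vW2 - lr * (h.T @ d_out) / len(X)
    vb2 = momentum * vb2 - lr * d_out.mean(axis=0)
    vW1 = momentum * vW1 - lr * (X.T @ d_h) / len(X)
    vb1 = momentum * vb1 - lr * d_h.mean(axis=0)
    W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]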
I followed the tutorial here http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm
And normally it works if I give it more hidden layer nodes. My test code is here https://github.com/mistree/GoDeep/blob/master/Test.go
So I thought it should work and started to implement the DBN by combining the RBM and the normal NN. However, the result then became really bad: it can't even learn an XOR gate in 1000 iterations, and sometimes it goes totally wrong. I tried to debug it, so after the pre-training of the DBN I do a reconstruction. Most of the time the reconstruction looks good, but the back propagation fails even when the pre-training result is perfect.
I really don't know what's wrong with the back propagation. I must have misunderstood the algorithm or made some big mistake in the implementation.
If possible, please run the test code and you'll see how weird it is. The code itself is quite readable. Any hint would be a great help. Thanks in advance.
I remember Hinton saying you can't train RBMs on XOR; something about the vector space doesn't allow a two-layer network to work. Deeper networks are less linear, which allows it to work.

prevent regular expression formatting in my javascript

I'm sure someone else has asked this, but my Google-fu is failing me and I cannot find it.
When I divide more than once in an equation like this:
this.active[i].pos(last.pos()+(last.width()/2)+10+(this.active[i].width()/2));
"/2)+10+(this.active[i].width()/" will come up with regular expression formatting(all orange) in the editor which is driving me insane. :(
Is there a way I can change my settings to prevent this? I do not use regular expressions at all, so disabling their formatting entirely in the editor would be acceptable.
Can anyone provide, or point me towards, an answer?
If you found it on Google, I would appreciate learning your search terms.
Thank you. :)
I had been searching the web for about 45 minutes trying to find a solution to this very question when I came across it here on Stack Overflow. I almost started a bounty on it but decided I'd see if I could figure it out myself.
I came up with two possible solutions, both of which are much simpler than I thought they would be.
Solution 1: Separate the formula into two sections that can be stored in variables and added together when needed. For example, I happened to be writing a formula for a surface area calculation which got formatted as a regular expression and returned the incorrect answer:
return [(this.base * this.height)/2] + [(this.perimeter * this.slant)/2];
I split the formula at the + and stored them in variables:
var a = (this.base * this.height)/2;
var b = (this.perimeter * this.slant)/2;
return a + b;
This solution worked just fine. But then I started thinking that there had to be a simpler solution I was over-looking which led me to:
Solution 2: Dividing by 2 is the same as multiplying by 0.5 (duh!). In my case, and in almost any case, dividing by 2 and multiplying by 0.5 will get you the same answer. My code then looked like this:
return [(this.base * this.height) * 0.5] + [(this.perimeter * this.slant) * 0.5];
I tested both, and both work, though obviously solution 2 is more efficient (less code).
The only time I could imagine needing to use solution 1 is if you're dividing by a very long number or a prime number (dividing by 3 gives you a more accurate result than multiplying by 0.33).
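As a quick check of that accuracy point (in Python for brevity, since the arithmetic is the same in any language):

print(10 / 3)     # about 3.3333333333333335: as close to one third as floats get
print(10 * 0.33)  # about 3.3000000000000003: 0.33 is not one third, so the result is simply off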
Anyway, I know you posted this question months ago and probably either came up with a solution or moved on, but I figured I'd post this answer anyway as a reference for any future issues with the same idea.
(Also, this is in JavaScript but I can't imagine something this simple is any different in a similar language).
