Space complexity of recursion using queue as parameter - recursion

Space complexity = recursion depth × memory consumed per call.
My recursion depth is log(n), and the queue I pass as a parameter holds up to N nodes, so I get a space complexity of O(N log N). However, a Medium blog post says it is O(N). Which one is right?
class Solution {
    public boolean isValidBST(TreeNode root) {
        Queue<TreeNode> nodes = new LinkedList<>();
        depth(root, nodes);
        while (nodes.size() > 1) {
            TreeNode node1 = nodes.remove();
            TreeNode node2 = nodes.element();
            if (node2.val <= node1.val) {
                return false;
            }
        }
        return true;
    }

    private void depth(TreeNode root, Queue<TreeNode> nodes) {
        if (root == null) {
            return;
        }
        depth(root.left, nodes);
        nodes.add(root);
        depth(root.right, nodes);
    }
}
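For contrast, here is a minimal sketch (not part of the original question) of an in-order check that validates during the traversal itself and only remembers the previously visited node, so the extra space is just the O(h) recursion stack instead of a queue holding all N nodes:

class Solution {
    private TreeNode prev;  // last node seen by the in-order walk

    public boolean isValidBST(TreeNode root) {
        prev = null;
        return inorder(root);
    }

    private boolean inorder(TreeNode node) {
        if (node == null) {
            return true;
        }
        if (!inorder(node.left)) {
            return false;
        }
        if (prev != null && node.val <= prev.val) {
            return false;  // values are not strictly increasing
        }
        prev = node;
        return inorder(node.right);
    }
}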

Related

Breadth first traversal of arbitrary graph with minimal memory

I have an enormous directed graph I need to traverse in search for the shortest path to a specific node from a given starting point. The graph in question does not exist explicitly; the child nodes are determined algorithmically from the parent nodes.
(To give an illustration: imagine a graph of chess positions. Each node is a chess position and its children are all the legal moves from that position.)
So I have a queue for open nodes, and every time I process the next node in the queue I enqueue all of its children. But since the graph can have cycles I also need to maintain a hashset of all visited nodes so I can check if I have visited one before.
This works okay, but since this graph is so large, I run into memory problems. All of the nodes in the queue are also stored in the hashset, which tends to be around 50% of the total number of visited nodes in practice in my case.
Is there some magical way to get rid of this redundancy while keeping the speed of the hashset? (Obviously, I could get rid of the redundancy by NOT hashing and just doing a linear search, but that is out of the question.)
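For reference, the baseline pattern described above looks roughly like the following; this is only an illustrative sketch in Java, and GraphNode with its expand() method are stand-ins for the application's own implicit node/move generation:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Plain BFS: every discovered node is stored in `visited`, and the frontier nodes
// are additionally stored in `queue` -- that overlap is the redundancy in question.
static GraphNode bfs(GraphNode start, GraphNode goal) {
    Deque<GraphNode> queue = new ArrayDeque<>();
    Set<GraphNode> visited = new HashSet<>();
    queue.add(start);
    visited.add(start);
    while (!queue.isEmpty()) {
        GraphNode current = queue.remove();
        if (current.equals(goal)) {
            return current;
        }
        for (GraphNode child : current.expand()) {  // children computed on the fly
            if (visited.add(child)) {               // add() returns false if already seen
                queue.add(child);
            }
        }
    }
    return null;  // goal not reachable from start
}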
I solved it by writing a class that stores the keys in a list and stores the indices of the keys in a hashtable. The next node "in the queue" is always the next node in the list, until you find what you're looking for or you've traversed the entire graph.
class IndexMap<T>
{
private List<T> values;
private LinkedList<int>[] buckets;
public int Count { get; private set; } = 0;
public IndexMap(int capacity)
{
values = new List<T>(capacity);
buckets = new LinkedList<int>[NextPowerOfTwo(capacity)];
for (int i = 0; i < buckets.Length; ++i)
buckets[i] = new LinkedList<int>();
}
public void Add(T item) //assumes item is not yet in map
{
if (Count == buckets.Length)
ReHash();
int bucketIndex = item.GetHashCode() & (buckets.Length - 1);
buckets[bucketIndex].AddFirst(Count++);
values.Add(item);
}
public bool Contains(T item)
{
int bucketIndex = item.GetHashCode() & (buckets.Length - 1);
foreach(int i in buckets[bucketIndex])
{
if (values[i].Equals(item))
return true;
}
return false;
}
public T this[int index]
{
get => values[index];
}
private void ReHash()
{
LinkedList<int>[] newBuckets = new LinkedList<int>[2 * buckets.Length];
for (int i = 0; i < newBuckets.Length; ++i)
newBuckets[i] = new LinkedList<int>();
for (int i = 0; i < buckets.Length; ++i)
{
foreach (int index in buckets[i])
{
int bucketIndex = values[index].GetHashCode() & (newBuckets.Length - 1);
newBuckets[bucketIndex].AddFirst(index);
}
buckets[i] = null;
}
buckets = newBuckets;
}
private int NextPowerOfTwo(int n)
{
if ((n & n-1) == 0)
return n;
int output = 1;
while (n > output)
{
output <<= 1;
}
return output;
}
}
The old method of maintaining both an array of the open nodes and a hashtable of the visited nodes needed n*(1+a)*size(T) space, where a is the ratio of nodes_in_the_queue over total_nodes_found and size(T) is the size of a node.
This method needs n*(size(T) + size(int)). If your nodes are significantly larger than an int, this can save a lot.
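As a rough worked example (the numbers are assumed for illustration, not taken from the original answer): with n = 100 million visited nodes, size(T) = 64 bytes and a ≈ 0.5, the old layout costs about 100M × 1.5 × 64 B ≈ 9.6 GB, while n*(size(T) + size(int)) with 4-byte ints comes to about 100M × 68 B ≈ 6.8 GB, a saving of roughly 30% before counting per-bucket overhead.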

Retouching several images in several Task

Generalities: an explanation of my program and how it works
I am working on a photo-retouching JavaFX application. The end user can load several images. When they click the REVERSE button, a Task is launched for each image using an Executor. Each of these Tasks executes the reversal algorithm: it fills an ArrayBlockingQueue<Pixel> (using the add method).
When the end user clicks the REVERSE button, as I said, these Tasks are launched. Just after that, I tell the JavaFX Application Thread to draw the Pixels from the ArrayBlockingQueue<Pixel> (using the remove method).
Thus, there is parallelism and concurrency (handled by the ArrayBlockingQueue<Pixel>) between the JavaFX Application Thread and the Tasks, and between the Tasks themselves.
To draw the Pixels from the ArrayBlockingQueue<Pixel>, the JavaFX Application Thread starts an AnimationTimer, which contains the previously mentioned remove call. This AnimationTimer is started for each image.
You may be wondering how this AnimationTimer can know which image a removed Pixel belongs to. In fact, each Pixel has a writable_image attribute that specifies the image it belongs to.
My problems
Tell me if I'm wrong, but my program should work. Indeed:
My JavaFX Application Thread is the only thread that changes the GUI (as required in JavaFX): the Tasks just do the calculations.
There are no concurrency problems, thanks to the BlockingQueue I use (in particular, there is no possibility of draining).
The AnimationTimer knows which image each Pixel belongs to.
However, that is (obviously!) not the case, otherwise I wouldn't have created this question, haha!
My problem is that my JavaFX application freezes (first problem), after having drawn only some of the reversed pixels rather than all of them (second problem), and only on the last loaded image (third problem).
A detail that could be the problems' cause
But I would need your opinion.
The AnimationTimer of course doesn't draw the reversed pixels of each image directly: the drawing is animated. The end user can see each pixel of an image being reversed, little by little. This is very practical for other algorithms, such as drawing a circle, because the user can watch how the algorithm works.
But to do that, the AnimationTimer needs to read a variable called max. This variable is modified (written) in... each Task. But it's an AtomicLong, so IF I AM NOT WRONG there shouldn't be any concurrency problem between the Tasks themselves, or between the JavaFX Application Thread and these Tasks.
However, it could be the problem: indeed, max's value could be 2000 in Task n°1 (i.e. for image n°1) and 59 in Task n°2 (i.e. for image n°2). The problem is that the AnimationTimer must use 2000 for image n°1 and 59 for image n°2. But once Tasks n°1 and n°2 have finished, the only value the AnimationTimer still knows is 59...
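To make that per-image requirement concrete, here is a minimal illustrative sketch (names assumed, not taken from the question) of what "one max per image" could look like, as opposed to a single shared AtomicLong:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javafx.scene.image.WritableImage;

// Sketch only: each image keeps its own max value, so a Task that finishes later
// cannot overwrite the value another image's animation still needs.
class PerImageMax {
    private final Map<WritableImage, Long> maxByImage = new ConcurrentHashMap<>();

    void setMax(WritableImage image, long max) {
        maxByImage.put(image, max);
    }

    long getMax(WritableImage image) {
        return maxByImage.getOrDefault(image, Long.MAX_VALUE);
    }
}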
Sources
When the user clicks on the button REVERSE
We launch the several Tasks and start the AnimationTimer several times. CLASS: RightPane.java
WritableImage current_writable_image;
for(int i = 0; i < this.gui.getArrayListImageViewsImpacted().size(); i++) {
current_writable_image = (WritableImage) this.gui.getArrayListImageViewsImpacted().get(i).getImage();
this.gui.getGraphicEngine().executor.execute(this.gui.getGraphicEngine().createTask(current_writable_image));
}
for(int i = 0; i < this.gui.getArrayListImageViewsImpacted().size(); i++) {
current_writable_image = (WritableImage) this.gui.getArrayListImageViewsImpacted().get(i).getImage();
this.gui.getImageAnimation().setWritableImage(current_writable_image);
this.gui.getImageAnimation().startAnimation();
}
The Tasks are created in the CLASS GraphicEngine, which contains an Executor:
public final Executor executor = Executors.newCachedThreadPool(runnable -> {
Thread t = new Thread(runnable);
t.setDaemon(true);
return t ;
});
public Task createTask(WritableImage writable_image) {
int image_width = (int) writable_image.getWidth(), image_height = (int) writable_image.getHeight();
Task ret = new Task() {
protected Void call() {
switch(operation_to_do) {
case "reverse" :
gui.getImageAnimation().setMax(image_width*image_height); // USE OF "MAX" VARIABLE
reverseImg(writable_image);
break;
}
return null;
}
};
return ret;
}
The same CLASS, GraphicEngine, also contains the reversal algorithm:
private void reverseImg(WritableImage writable_image) {
int image_width = (int) writable_image.getWidth(), image_height = (int) writable_image.getHeight();
BlockingQueue<Pixel> updates = gui.getUpdates();
PixelReader pixel_reader = writable_image.getPixelReader();
double[] rgb_reversed;
for (int x = 0; x < image_width; x++) {
for (int y = 0; y < image_height; y++) {
rgb_reversed = PhotoRetouchingFormulas.reverse(pixel_reader.getColor(x, y).getRed(), pixel_reader.getColor(x, y).getGreen(), pixel_reader.getColor(x, y).getBlue());
updates.add(new Pixel(x, y, Color.color(rgb_reversed[0], rgb_reversed[1], rgb_reversed[2], pixel_reader.getColor(x, y).getOpacity()), writable_image));
}
}
}
Finally, here is the code of the AnimationTimer subclass, ImageAnimation. There is nothing special about it. Note that the variable max is used here too (and in the CLASS GraphicEngine, via setMax).
public class ImageAnimation extends AnimationTimer {
private Gui gui;
private AtomicLong max, speed, max_delay;
private long count, start;
private WritableImage writable_image;
ImageAnimation (Gui gui) {
this.gui = gui;
this.count = 0;
this.start = -1;
this.max = new AtomicLong(Long.MAX_VALUE);
this.max_delay = new AtomicLong(999_000_000);
this.speed = new AtomicLong(this.max_delay.get());
}
public void setMax(long max) {
this.max.set(max);
}
public void setSpeed(long speed) { this.speed.set(speed); }
public double getMaxDelay() { return this.max_delay.get(); }
@Override
public void handle(long timestamp) {
if (start < 0) {
start = timestamp ;
return ;
}
ArrayList<Pixel> list_sorted_pixels = new ArrayList<>();
BlockingQueue<Pixel> updates = this.gui.getUpdates();
for(Pixel new_pixel : updates) {
if(new_pixel.getWritableImage() == writable_image) {
list_sorted_pixels.add(new_pixel);
}
}
while (list_sorted_pixels.size() > 0 && timestamp - start > (count * this.speed.get()) / (writable_image.getWidth()) && !updates.isEmpty()) {
Pixel update = list_sorted_pixels.remove(0);
updates.remove(update);
count++;
if (update.getX() >= 0 && update.getY() >= 0) {
writable_image.getPixelWriter().setColor(update.getX(), update.getY(), update.getColor());
}
}
if (count >= max.get()) {
this.count = 0;
this.start = -1;
this.max.set(Long.MAX_VALUE);
stop();
}
}
public void setWritableImage(WritableImage writable_image) { this.writable_image = writable_image; }
public void startAnimation() {
this.start();
}
}

Directed Graph Traversal - All paths

Given a directed graph with
A root node
Some leaf nodes
Multiple nodes can be connected to the same node
Cycles can exist
We need to print all the paths from the root node to all the leaf nodes. This is the closest question I found to this problem:
Find all paths between two graph nodes
If you actually care about ordering your paths from shortest path to longest path then it would be far better to use a modified A* or Dijkstra algorithm. With a slight modification the algorithm will return as many of the possible paths as you want in order of shortest path first. So if what you really want are all possible paths ordered from shortest to longest then this is the way to go. The code I suggested above would be much slower than it needs to be if you care about ordering from shortest to longest, not to mention it would take up more space than you'd want in order to store every possible path at once.
If you want an A*-based implementation capable of returning all paths ordered from the shortest to the longest, the following will accomplish that. It has several advantages. First off, it is efficient at sorting from shortest to longest. Also, it computes each additional path only when needed, so if you stop early because you don't need every single path you save some processing time. It also reuses data for subsequent paths each time it calculates the next path, so it is more efficient. Finally, if you find some desired path you can abort early, saving some computation time. Overall this should be the most efficient algorithm if you care about sorting by path length.
import java.util.*;
public class AstarSearch {
private final Map<Integer, Set<Neighbor>> adjacency;
private final int destination;
private final NavigableSet<Step> pending = new TreeSet<>();
public AstarSearch(Map<Integer, Set<Neighbor>> adjacency, int source, int destination) {
this.adjacency = adjacency;
this.destination = destination;
this.pending.add(new Step(source, null, 0));
}
public List<Integer> nextShortestPath() {
Step current = this.pending.pollFirst();
while( current != null) {
if( current.getId() == this.destination )
return current.generatePath();
for (Neighbor neighbor : this.adjacency.get(current.id)) {
if(!current.seen(neighbor.getId())) {
final Step nextStep = new Step(neighbor.getId(), current, current.cost + neighbor.cost + predictCost(neighbor.id, this.destination));
this.pending.add(nextStep);
}
}
current = this.pending.pollFirst();
}
return null;
}
protected int predictCost(int source, int destination) {
return 0; //Behaves identical to Dijkstra's algorithm, override to make it A*
}
private static class Step implements Comparable<Step> {
final int id;
final Step parent;
final int cost;
public Step(int id, Step parent, int cost) {
this.id = id;
this.parent = parent;
this.cost = cost;
}
public int getId() {
return id;
}
public Step getParent() {
return parent;
}
public int getCost() {
return cost;
}
public boolean seen(int node) {
if(this.id == node)
return true;
else if(parent == null)
return false;
else
return this.parent.seen(node);
}
public List<Integer> generatePath() {
final List<Integer> path;
if(this.parent != null)
path = this.parent.generatePath();
else
path = new ArrayList<>();
path.add(this.id);
return path;
}
@Override
public int compareTo(Step step) {
if(step == null)
return 1;
if( this.cost != step.cost)
return Integer.compare(this.cost, step.cost);
if( this.id != step.id )
return Integer.compare(this.id, step.id);
if( this.parent != null )
return this.parent.compareTo(step.parent);
if(step.parent == null)
return 0;
return -1;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Step step = (Step) o;
return id == step.id &&
cost == step.cost &&
Objects.equals(parent, step.parent);
}
#Override
public int hashCode() {
return Objects.hash(id, parent, cost);
}
}
/*******************************************************
* Everything below here just sets up your adjacency *
* It will just be helpful for you to be able to test *
* It isnt part of the actual A* search algorithm *
********************************************************/
private static class Neighbor {
final int id;
final int cost;
public Neighbor(int id, int cost) {
this.id = id;
this.cost = cost;
}
public int getId() {
return id;
}
public int getCost() {
return cost;
}
}
public static void main(String[] args) {
final Map<Integer, Set<Neighbor>> adjacency = createAdjacency();
final AstarSearch search = new AstarSearch(adjacency, 1, 4);
System.out.println("printing all paths from shortest to longest...");
List<Integer> path = search.nextShortestPath();
while(path != null) {
System.out.println(path);
path = search.nextShortestPath();
}
}
private static Map<Integer, Set<Neighbor>> createAdjacency() {
final Map<Integer, Set<Neighbor>> adjacency = new HashMap<>();
//This sets up the adjacencies. In this case all adjacencies have a cost of 1, but they dont need to.
addAdjacency(adjacency, 1,2,1,5,1); //{1 | 2,5}
addAdjacency(adjacency, 2,1,1,3,1,4,1,5,1); //{2 | 1,3,4,5}
addAdjacency(adjacency, 3,2,1,5,1); //{3 | 2,5}
addAdjacency(adjacency, 4,2,1); //{4 | 2}
addAdjacency(adjacency, 5,1,1,2,1,3,1); //{5 | 1,2,3}
return Collections.unmodifiableMap(adjacency);
}
private static void addAdjacency(Map<Integer, Set<Neighbor>> adjacency, int source, Integer... dests) {
if( dests.length % 2 != 0)
throw new IllegalArgumentException("dests must have an even number of arguments; each pair is the id and cost for that traversal");
final Set<Neighbor> destinations = new HashSet<>();
for(int i = 0; i < dests.length; i+=2)
destinations.add(new Neighbor(dests[i], dests[i+1]));
adjacency.put(source, Collections.unmodifiableSet(destinations));
}
}
The output from the above code is the following:
[1, 2, 4]
[1, 5, 2, 4]
[1, 5, 3, 2, 4]
Notice that each time you call nextShortestPath() it generates the next shortest path for you on demand. It only calculates the extra steps needed and doesn't traverse any old paths twice. Moreover, if you decide you don't need all the paths and end execution early, you've saved yourself considerable computation time. You only compute up to the number of paths you need and no more.
Finally it should be noted that the A* and Dijkstra algorithms do have some minor limitations, though I don't think they would affect you. Namely, they will not work correctly on a graph that has negative weights.
Here is a link to JDoodle where you can run the code yourself in the browser and see it working. You can also change around the graph to show it works on other graphs as well: http://jdoodle.com/a/ukx

Binary Tree and Return root node

I'm building a binary tree where the root is given and the children are either root-3, root-2 or root-1 (that is, they hold those numbers of pennies). So 5 would have children of 2, 3 and 4, and so on, until the leaves are 0. Here's my method for building such a tree. I don't understand why the method doesn't return the original node; in this case, the value should be 3.
Any guidance would be awesome.
public GameNode buildTree1(GameNode root){
    int penn = root.getPennies();
    if (penn < 0)
    {
        return null;
    }
    else {
        root.print();
        root.setLeft(buildTree1(new GameNode(penn-1)));
        root.setMiddle(buildTree1(new GameNode(penn-2)));
        root.setRight(buildTree1(new GameNode(penn-3)));
        return root;
    }
}
Get/Set Methods
public void setLeft(GameNode newNode) {
// TODO Auto-generated method stub
left = newNode;
}
Same for setMiddle and setRight;
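For what it's worth, here is a minimal calling sketch (hypothetical, not from the question) showing that the method hands back the same node it was given, so the returned reference is only lost if the caller discards it:

// Hypothetical usage; GameNode's constructor and print() are assumed as in the question.
GameNode original = new GameNode(3);
GameNode returned = buildTree1(original);   // builds the whole tree below the root
System.out.println(returned == original);   // true: the original node is returned
returned.print();                           // prints the root's value, 3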

Size-limited queue that holds last N elements in Java

A very simple & quick question on Java libraries: is there a ready-made class that implements a Queue with a fixed maximum size - i.e. it always allows addition of elements, but it will silently remove head elements to accommodate space for newly added elements?
Of course, it's trivial to implement it manually:
import java.util.LinkedList;
public class LimitedQueue<E> extends LinkedList<E> {
private int limit;
public LimitedQueue(int limit) {
this.limit = limit;
}
@Override
public boolean add(E o) {
super.add(o);
while (size() > limit) { super.remove(); }
return true;
}
}
As far as I can see, there's no standard implementation in the Java standard library, but maybe there's one in Apache Commons or something like that?
Apache Commons Collections 4 has a CircularFifoQueue, which is what you are looking for. Quoting the javadoc:
CircularFifoQueue is a first-in first-out queue with a fixed size that replaces its oldest element if full.
import java.util.Queue;
import org.apache.commons.collections4.queue.CircularFifoQueue;
Queue<Integer> fifo = new CircularFifoQueue<Integer>(2);
fifo.add(1);
fifo.add(2);
fifo.add(3);
System.out.println(fifo);
// Observe the result:
// [2, 3]
If you are using an older version of Apache Commons Collections (3.x), you can use the CircularFifoBuffer, which is basically the same thing without generics.
Update: updated answer following release of commons collections version 4 that supports generics.
Guava now has an EvictingQueue, a non-blocking queue which automatically evicts elements from the head of the queue when attempting to add new elements onto the queue and it is full.
import java.util.Queue;
import com.google.common.collect.EvictingQueue;
Queue<Integer> fifo = EvictingQueue.create(2);
fifo.add(1);
fifo.add(2);
fifo.add(3);
System.out.println(fifo);
// Observe the result:
// [2, 3]
I like @FractalizeR's solution. But in addition I would keep and return the value from super.add(o)!
public class LimitedQueue<E> extends LinkedList<E> {
private int limit;
public LimitedQueue(int limit) {
this.limit = limit;
}
@Override
public boolean add(E o) {
boolean added = super.add(o);
while (added && size() > limit) {
super.remove();
}
return added;
}
}
Use composition, not extends (yes, I mean the extends keyword in Java, i.e. inheritance). Composition is superior because it completely shields your implementation, allowing you to change the implementation without impacting the users of your class.
I recommend trying something like this (I'm typing directly into this window, so buyer beware of syntax errors):
public class LimitedSizeQueue<ElementType> implements Queue<ElementType>
{
private int maxSize;
private LinkedList storageArea;
public LimitedSizeQueue(final int maxSize)
{
this.maxSize = maxSize;
storageArea = new LinkedList();
}
public boolean offer(ElementType element)
{
if (storageArea.size() < maxSize)
{
storageArea.addFirst(element);
}
else
{
storageArea.removeLast();
storageArea.addFirst(element);
}
return true;
}
... the rest of this class
A better option (based on the answer by Asaf) might be to wrap the Apache Collections CircularFifoBuffer with a generic class. For example:
public class LimitedSizeQueue<ElementType> implements Queue<ElementType>
{
private int maxSize;
private CircularFifoBuffer storageArea;
public LimitedSizeQueue(final int maxSize)
{
if (maxSize > 0)
{
this.maxSize = maxSize;
storageArea = new CircularFifoBuffer(maxSize);
}
else
{
throw new IllegalArgumentException("blah blah blah");
}
}
... implement the Queue interface using the CircularFifoBuffer class
}
The only thing I know of that has limited space is the BlockingQueue interface (implemented by, e.g., the ArrayBlockingQueue class) - but it does not remove the first element when full; instead it blocks the put operation until space is free (i.e. until another thread removes an element).
To my knowledge, your trivial implementation is the easiest way to get such behaviour.
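To illustrate the difference, a small sketch of my own in the style of the snippets above: a full ArrayBlockingQueue rejects (or blocks) instead of evicting.

import java.util.concurrent.ArrayBlockingQueue;

ArrayBlockingQueue<Integer> bounded = new ArrayBlockingQueue<>(2);
bounded.offer(1);
bounded.offer(2);
boolean accepted = bounded.offer(3);  // false: the queue is full, nothing is evicted
System.out.println(accepted);         // false
System.out.println(bounded);          // [1, 2]
// bounded.put(3) would instead block the calling thread until space becomes available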
You can use a MinMaxPriorityQueue from Google Guava, from the javadoc:
A min-max priority queue can be configured with a maximum size. If so, each time the size of the queue exceeds that value, the queue automatically removes its greatest element according to its comparator (which might be the element that was just added). This is different from conventional bounded queues, which either block or reject new elements when full.
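A short usage sketch of my own, in the same style as the examples above; note that eviction is by ordering rather than by insertion order, so this is not a strict FIFO:

import com.google.common.collect.MinMaxPriorityQueue;

MinMaxPriorityQueue<Integer> bounded = MinMaxPriorityQueue
        .maximumSize(2)   // retain at most 2 elements
        .create();
bounded.add(1);
bounded.add(2);
bounded.add(3);   // exceeds the maximum size: the greatest element (3) is evicted
System.out.println(bounded.size());       // 2
System.out.println(bounded.peekFirst());  // 1 (least element)
System.out.println(bounded.peekLast());   // 2 (greatest remaining element)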
An LRUMap is another possibility, also from Apache Commons.
http://commons.apache.org/collections/apidocs/org/apache/commons/collections/map/LRUMap.html
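A brief sketch of my own showing how it behaves (assuming the Commons Collections 4 package); note that it is a Map rather than a Queue, and it evicts the least recently used entry rather than the oldest inserted one:

import org.apache.commons.collections4.map.LRUMap;

LRUMap<Integer, String> cache = new LRUMap<>(2);  // holds at most 2 entries
cache.put(1, "a");
cache.put(2, "b");
cache.get(1);        // accessing key 1 marks it as most recently used
cache.put(3, "c");   // evicts the least recently used entry, key 2
System.out.println(cache.containsKey(2));  // false
System.out.println(cache.keySet());        // [1, 3]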
Ok I'll share this option. This is a pretty performant option - it uses an array internally - and reuses entries. It's thread safe - and you can retrieve the contents as a List.
static class FixedSizeCircularReference<T> {
T[] entries
FixedSizeCircularReference(int size) {
this.entries = new Object[size] as T[]
this.size = size
}
int cur = 0
int size
synchronized void add(T entry) {
entries[cur++] = entry
if (cur >= size) {
cur = 0
}
}
List<T> asList() {
int c = cur
int s = size
T[] e = entries.collect() as T[]
List<T> list = new ArrayList<>()
int oldest = (c == s - 1) ? 0 : c
for (int i = 0; i < e.length; i++) {
def entry = e[oldest + i < s ? oldest + i : oldest + i - s]
if (entry) list.add(entry)
}
return list
}
}
public class ArrayLimitedQueue<E> extends ArrayDeque<E> {
private int limit;
public ArrayLimitedQueue(int limit) {
super(limit + 1);
this.limit = limit;
}
@Override
public boolean add(E o) {
boolean added = super.add(o);
while (added && size() > limit) {
super.remove();
}
return added;
}
@Override
public void addLast(E e) {
super.addLast(e);
while (size() > limit) {
super.removeLast();
}
}
@Override
public boolean offerLast(E e) {
boolean added = super.offerLast(e);
while (added && size() > limit) {
super.pollLast();
}
return added;
}
}
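A quick usage sketch of the class above (my own, mirroring the earlier examples):

Queue<Integer> fifo = new ArrayLimitedQueue<>(2);
fifo.add(1);
fifo.add(2);
fifo.add(3);
System.out.println(fifo);
// Observe the result:
// [2, 3]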
