Let's say we have a robot on a chess board that can move like a king.
The board uses coordinates from [1,1] to [8,8].
The starting position is [1,1] and the final one is [8,8]. There is a list X that contains the obstacles' coordinates, for example [[1,4],[2,5],[5,6]]. The problem is: is there a possible way for the robot to move from the starting to the final position?
I wrote these predicates:
path([A,B],_):- A is 8, B is 8.
path([A,B],X):-
    possibleMoves(A,B,L),   % possibleMoves returns a list of all the possible coords that the robot can go to (e.g. for [1,1]: [[0,1],[0,0],[2,1],...])
    member([Ex,Ey],L),      % member generates all the possible members from the list L
    not(member([Ex,Ey],X)), % if this member is not a member of the "forbidden ones"
    Ex<9,Ex>0,Ey<9,Ey>0,    % and its coords are on the board
    path([Ex,Ey],X).

isTherePath(X):-
    possibleMoves(1,1,L),
    member(E,L),
    path(E,X).
But there is a mistake and it does not return any value. The recursion never stops, and I can't find out why.
Define in one predicate what a valid step is, including all the restrictions you have. Stick to this naming convention: X0 for the "first" value, X for the "last" value.
step(Forbidden, [X0,Y0], [X,Y]) :-
    possibleMoves(X0, Y0, L),   % rather: possibleMove([X0,Y0], L),
    member([X,Y], L),
    between(1, 8, X),
    between(1, 8, Y),
    non_member([X,Y], Forbidden).
Now you have a path with:
..., closure0(step(Forbidden), S0,S), ...
closure0/3 takes care of cycles.
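For comparison, the same search can be sketched in Python (an illustration, not the asker's Prolog): the visited set plays the role closure0/3 plays above, breaking cycles, while the bounds and obstacle checks mirror step/3.

```python
from collections import deque

def king_path_exists(forbidden, start=(1, 1), goal=(8, 8)):
    """BFS over king moves on an 8x8 board.

    The visited set breaks cycles, just as closure0/3 does
    in the Prolog version above."""
    forbidden = {tuple(p) for p in forbidden}
    visited = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return True
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (x + dx, y + dy)
                if nxt == (x, y):
                    continue
                # on the board, not forbidden, not seen before
                if 1 <= nxt[0] <= 8 and 1 <= nxt[1] <= 8 \
                        and nxt not in forbidden and nxt not in visited:
                    visited.add(nxt)
                    queue.append(nxt)
    return False
```

With the example obstacle list the goal is reachable; a full row of obstacles blocks every path, since a king changes its y coordinate by at most 1 per move.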
Given a DAG and a function f which maps every vertex to a unique number from 1 to |V|, I need to write pseudocode for an algorithm that finds, for every vertex v, the minimal value of f(u) among all vertices u that are reachable from v, and saves it as an attribute of v. The time complexity of the algorithm needs to be O(|V| + |E|) (assuming that the time complexity of f is O(1)).
I thought about using DFS (or a variation of it) and/or topological sort, but I don't know how to use it in order to solve this problem.
In addition, I need to think about an algorithm that gets an undirected graph and the function f, and calculates the same thing for every vertex, and I don't know how to do that either.
Your idea of using DFS is right. Actually, the function f(v) is only given so that each node can be uniquely identified by a number between 1 and |V|.
Just a hint for solving: you would have to modify DFS so that it returns the minimum value of f(u) over the vertices u reachable from the current node, and save it in another array, let's say minReach, where the index is given by f(v). The visited array vis is similarly indexed by f(v).
I am also giving the pseudocode below, but do try on your own first.
The pseudocode is Python-like and assumes the graph and the function f(v) are available. 0-based indexing is assumed.
vis = [0, 0, 0, .. |V| times]      # visited array for dfs
minReach = [1, 2, 3, .. |V| times] # minReach[f(v)-1] = minimal f-value reachable from v, initialised to f(v)

function dfs(node):
    vis[f(node)-1] = 1                 # mark node as visited
    for v in graph[node]:
        if vis[f(v)-1] != 1:           # adjacent node not yet visited: recurse
            minReach[f(node)-1] = min(minReach[f(node)-1], dfs(v))
        else:                          # already visited: in a DAG its value is final, so reuse it
            minReach[f(node)-1] = min(minReach[f(node)-1], minReach[f(v)-1])
    return minReach[f(node)-1]

for vertex in graph:                   # each vertex is checked in graph
    if vis[f(vertex)-1] != 1:          # if vertex is not visited, dfs is applied
        dfs(vertex)
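The same idea in runnable Python, as a sketch assuming the graph is given as an adjacency dict: a plain dict replaces the f-indexed arrays, and the memoisation is valid precisely because in a DAG a visited vertex has always finished its DFS.

```python
def min_reachable(graph, f):
    """For every vertex v of a DAG, the minimal f(u) over all u
    reachable from v (including v itself), in O(|V| + |E|)."""
    min_reach = {}  # vertex -> minimal f-value reachable from it

    def dfs(node):
        if node in min_reach:       # finished earlier; DAG => value is final
            return min_reach[node]
        best = f(node)
        for succ in graph[node]:
            best = min(best, dfs(succ))
        min_reach[node] = best
        return best

    for v in graph:
        dfs(v)
    return min_reach
```

Each vertex's answer is computed once and reused, which is what keeps the whole pass linear in |V| + |E|.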
I want to understand recursion.
I understand simple examples with math, but I'm not sure I know the essence of it.
I have one example that I don't understand how it works:
TREE-ROOT-INSERT(x, z)
    if x = NIL
        return z
    if z.key < x.key
        x.left = TREE-ROOT-INSERT(x.left, z)
        return RIGHT-ROTATE(x)
    else
        x.right = TREE-ROOT-INSERT(x.right, z)
        return LEFT-ROTATE(x)
I know what this code does:
first it inserts a node in a BST, and then it rotates at each step so the new node becomes the root.
But in my mind, analysing the code, I suppose that it inserts the node where it has to go and then rotates the tree JUST 1 TIME.
How is it possible that the tree is rotated every time?
You need to maintain your place in the recursive call for each level of the tree. When you hit return RIGHT-ROTATE (or left) for the first time, you're not completely done; you take the tree that is the result of the ROTATE function, and place it in the code where the recursive TREE-ROOT-INSERT call was one level higher in the stack. You then rotate again, and return the current tree one level higher up in the stack, until you've hit the original root of the tree.
What is important for understanding recursion is to think of the recursive function as an abstract black box. In other words, when reading or reasoning about a recursive function, you should focus on the current invocation, treat the recursive call as atomic (something you do not step into), assume it can do what it is supposed to do, and see how its result can be used to solve the current invocation.
You already know the contract of your TREE-ROOT-INSERT(x, z):
insert z into a binary search tree rooted at x, transform the tree so that z will become the new root.
let's look at this snippet:
if z.key < x.key
x.left = TREE-ROOT-INSERT(x.left, z)
return RIGHT-ROTATE(x)
This says z is less than x, so it goes to the left sub-tree (because it is a BST). TREE-ROOT-INSERT is invoked again, but we won't follow into it. Instead we just assume it can do what it is meant to do: it will insert z into the tree rooted at x.left, and make z the new root. Then you will get a tree of the structure below:
x
/ \
z ...
/ \
... ...
Again, you don't know exactly how calling TREE-ROOT-INSERT(x.left, z) gets you the z-rooted sub-tree. At this moment you don't care, because the really important part is what follows: how do you make this entire tree rooted at z? The answer is RIGHT-ROTATE(x).
But in my mind, analysing the code, I suppose that it inserts the node where it has to go and then rotates the tree JUST 1 TIME.
How is it possible that the tree is rotated every time?
If I understand you correctly, you are still thinking about how to solve the problem in a non-recursive way. It is true that you can insert z into the BST rooted at x using the standard BST insertion procedure. That will put z in the correct position. However, to bring z to the root from that position, you need more than one rotation.
In the recursive version, rotation is required to bring z to the root after you get a z-rooted sub-tree. But to get the z-rooted sub-tree from the original x.left rooted sub-tree, you need a rotation as well. Rotation is called many times, but on different sub-trees.
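Here is the same procedure in Python, as a sketch with hypothetical Node and rotate helpers (not taken from any particular textbook), which you can trace to watch one rotation happen per recursion level:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def right_rotate(x):
    # lift x.left above x
    y = x.left
    x.left = y.right
    y.right = x
    return y

def left_rotate(x):
    # lift x.right above x
    y = x.right
    x.right = y.left
    y.left = x
    return y

def tree_root_insert(x, z):
    """Insert z into the BST rooted at x and rotate it up to the root.
    Each level of the recursion performs one rotation on its sub-tree."""
    if x is None:
        return z
    if z.key < x.key:
        x.left = tree_root_insert(x.left, z)
        return right_rotate(x)
    else:
        x.right = tree_root_insert(x.right, z)
        return left_rotate(x)
```

After a sequence of root-inserts, the last inserted key sits at the root, and the tree is still a valid BST.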
I was reading the description of Tarjan's algorithm for finding the strongly connected components in a directed graph.
But I find it hard to understand this code snippet:
if (w.index is undefined) then
// Successor w has not yet been visited; recurse on it
strongconnect(w)
v.lowlink := min(v.lowlink, w.lowlink)
else if (w is in S) then
// Successor w is in stack S and hence in the current SCC
v.lowlink := min(v.lowlink, w.index)
end if
The fourth and the seventh lines are different, and this makes me confused.
In my opinion, the seventh line could be written the same way as the fourth line:
v.lowlink := min(v.lowlink, w.lowlink)
I tested this in my program and it works fine, and for me it's easier to understand, because vertex v could reach higher up towards the root, but I couldn't prove it.
I wrote a program that enumerated all graphs of size 4, then run each version (with either min(v.lowlink, w.index) or min(v.lowlink, w.lowlink) if w is in S) and compared the results. Both were exactly identical in all cases, even though w.lowlink and w.index were often different.
The reason why we can use w.index is this: consider where on the stack S relative to the current node v the other node w is.
If it's earlier on the stack then it has a smaller index than the current node (because it was visited earlier, duh), so the current node is not the "head" of its strongly connected component, and that would be reflected in v.lowlink <= w.index < v.index anyway. And it's not like w.lowlink has any particular meaning at this point either; it's in the process of being computed and doesn't necessarily have its final value yet.
Now, if w is later in the stack than v, then the crucial property that the algorithm depends on is that w is then a descendant of v, not some sibling/cousin node still left there from an earlier recursive call. Or, as it is usually stated in a complete proof, strongly connected components never span several unconnected branches of our search tree (forest). Because since it's an SCC, there must be a path from w to v, and since we are enumerating nodes in depth-first order, we would have visited v using that path from w before we finished processing w, so w would be earlier in the stack than v!
And if w is a descendant of v then we already got its actual lowlink value the first time we visited it and are not interested in it any more.
On a side note, it's trivial to get rid of the lowlink property on nodes and make strongconnect return it directly. Then we wouldn't be tempted to check it instead of w.index in the second case =)
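For reference, here is a compact Python sketch of Tarjan's algorithm using the w.index form from the snippet above (names chosen to match it; the recursive version assumes the graph is small enough for Python's call stack):

```python
def tarjan_scc(graph):
    """Tarjan's strongly connected components.

    Uses index[w] in the 'successor already on the stack' case,
    exactly as in the pseudocode discussed above."""
    index = {}        # discovery index of each node
    lowlink = {}
    stack = []        # the stack S
    on_stack = set()
    sccs = []
    counter = [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:            # successor not yet visited: recurse
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:           # successor in the current SCC
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:        # v is the root of an SCC: pop it off
            scc = set()
            while True:
                w = stack.pop()
                on_stack.remove(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs
```

On a cycle 1 -> 2 -> 3 -> 1 plus a two-node cycle 4 <-> 5 and an isolated node 6, this yields the three expected components.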
Given an undirected cyclic graph, I want to find all possible traversals with Breadth-First search or Depth-First search. That is given a graph as an adjacency-list:
A-BC
B-A
C-ADE
D-C
E-C
So all BFS paths from root A would be:
{ABCDE,ABCED,ACBDE,ACBED}
and for DFS:
{ABCDE,ABCED,ACDEB,ACEDB}
How would I generate those traversals algorithmically in a meaningful way? I suppose one could generate all permutations of letters and check their validity, but that seems like a last resort to me.
Any help would be appreciated.
Apart from the obvious way where you actually perform all possible DFS and BFS traversals you could try this approach:
Step 1.
In a dfs traversal starting from the root A transform the adjacency list of the currently visited node like so: First remove the parent of the node from the list. Second generate all permutations of the remaining nodes in the adj list.
So if you are at node C having come from node A you will do:
C -> ADE transform into C -> DE transform into C -> [DE, ED]
Step 2.
After step 1 you have the following transformed adj list:
A -> [CB, BC]
B -> []
C -> [DE, ED]
D -> []
E -> []
Now you launch the processing starting from (A,0), where the first item in the pair is the traversal path and the second is an index. Let's assume we have two queues: a BFS queue and a DFS queue. We put this pair into both queues.
Now we repeat the following, first for one queue until it is empty and then for the other queue.
We pop the first pair off the queue. We get (A,0). The node A maps to [CB, BC]. So we generate two new paths (ACB,1) and (ABC,1). Put these new paths in the queue.
Take the first one of these off the queue to get (ACB,1). The index is 1 so we look at the second character in the path string. This is C. Node C maps to [DE, ED].
The BFS children of this path would be (ACBDE,2) and (ACBED,2) which we obtained by appending the child permutation.
The DFS children of this path would be (ACDEB,2) and (ACEDB,2) which we obtained by inserting the child permutation right after C into the path string.
We generate the new paths according to which queue we are working on, based on the above and put them in the queue. So if we are working on the BFS queue we put in (ACBDE,2) and (ACBED,2). The contents of our queue are now : (ABC,1) , (ACBDE,2), (ACBED,2).
We pop (ABC,1) off the queue. Generate (ABC,2) since B has no children. And get the queue :
(ACBDE,2), (ACBED,2), (ABC,2) and so on. At some point we will end up with a bunch of pairs where the index is not contained in the path. For example if we get (ACBED,5) we know this is a finished path.
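The "obvious way" mentioned at the start, actually performing every possible DFS, is also compact enough to sketch in Python: carry the explicit DFS stack along and branch on every choice of the next unvisited neighbour.

```python
def all_dfs_orders(graph, root):
    """Enumerate every DFS visit order by branching on each choice of
    the next unvisited neighbour; graph maps node -> neighbour string."""
    results = set()

    def go(stack, visited, order):
        while stack:
            node = stack[-1]
            choices = [n for n in graph[node] if n not in visited]
            if not choices:
                stack = stack[:-1]      # neighbours exhausted: backtrack
                continue
            for c in choices:           # branch on each possible next node
                go(stack + [c], visited | {c}, order + c)
            return
        results.add(order)              # stack empty: one complete traversal

    go([root], {root}, root)
    return results
```

On the adjacency list from the question this produces exactly the four DFS traversals the asker listed.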
BFS should be quite simple: each node has a certain depth at which it will be found. In your example you find A at depth 0, B and C at depth 1, and E and D at depth 2. In each BFS path, you will have the element with depth 0 (A) as the first element, followed by any permutation of the elements at depth 1 (B and C), followed by any permutation of the elements at depth 2 (E and D), etc...
If you look at your example, your 4 BFS paths match that pattern. A is always the first element, followed by BC or CB, followed by DE or ED. You can generalize this for graphs with nodes at deeper depths.
To find those depths, all you need is one Dijkstra (or plain BFS) search, which is quite cheap.
In DFS, you don't have the nice separation by depth which makes BFS straightforward. I don't immediately see an algorithm that is as efficient as the one above. You could set up a graph structure and build up your paths by traversing your graph and backtracking. There are some cases in which this would not be very efficient but it might be enough for your application.
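The depth-layer idea can be sketched in Python as follows (it assumes, as described above, that any per-level permutation yields a valid BFS order, which holds for this example):

```python
from collections import deque
from itertools import permutations

def all_bfs_orders(graph, root):
    """Group nodes by BFS depth, then combine every permutation of
    each level: depth-0 nodes first, then depth-1, and so on."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    levels = {}
    for node, d in depth.items():
        levels.setdefault(d, []).append(node)
    orders = ['']
    for d in sorted(levels):            # extend every prefix with every
        orders = [o + ''.join(p)        # permutation of the next level
                  for o in orders
                  for p in permutations(sorted(levels[d]))]
    return set(orders)
```

On the question's graph this reproduces the four BFS paths listed by the asker.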
Suppose we have a directed, weighted graph. Our task is to find all paths between two vertices (source and destination) whose cost is less than or equal to N. We visit every vertex only once. In a later version I'd like to add the condition that the source can be the destination (we just make a loop).
I think it can be done with a modified Dijkstra's algorithm, but I have no idea how to implement such a thing. Thanks for any help.
You could use recursive backtracking to solve this problem. Terminate your recursion when:
You get to the destination
You visit a node that was already visited
Your path length exceeds N.
Pseudocode:
list curpath := {}
int dest, maxlen

def findPaths(curNode, dist):
    if dist > maxlen:
        return
    if curNode = dest:
        print curpath + curNode
        return
    if curNode is marked:
        return
    add curNode to curpath
    mark curNode
    for nextNode, edgeDist adjacent to curNode:
        findPaths(nextNode, dist + edgeDist)
    unmark curNode            # backtrack, so curNode can appear on other paths
    remove last element of curpath
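The pseudocode above translates to runnable Python roughly like this (a sketch; the graph is assumed to map each node to (neighbour, cost) pairs, and each node is unmarked on the way out so it can appear on other paths):

```python
def find_paths(graph, src, dest, maxlen):
    """All simple paths from src to dest with total cost <= maxlen,
    by recursive backtracking."""
    paths = []
    curpath = []
    marked = set()

    def walk(node, dist):
        if dist > maxlen:              # path already too expensive
            return
        if node == dest:               # reached the destination: record path
            paths.append(curpath + [node])
            return
        if node in marked:             # node already on the current path
            return
        curpath.append(node)
        marked.add(node)
        for nbr, cost in graph[node]:
            walk(nbr, dist + cost)
        marked.remove(node)            # backtrack
        curpath.pop()

    walk(src, 0)
    return paths
```

Without the unmark step, a node used on one path would be blocked on every later path, and valid paths would be silently dropped.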
You want to find all the paths from point A to point B in a directed graph, such that the distance from A to B is smaller than N, allowing the possibility that A = B.
Dijkstra's algorithm is tailored to find the smallest path from one point to another in a graph, and drops all the others along the way, so to speak. Because of this, it cannot be used to find all the paths if we include paths which overlap.
You can achieve your goal by doing a breadth-first search in the graph, keeping each branch of the covering tree in its own stack (you will get an enormous amount of them if the nodes are very well connected), and stopping at depth N. All the branches which have reached B are kept aside. Once depth N has been covered, you drop all the paths which didn't reach B. The remaining ones, together with the ones kept aside, become your solutions.
You may choose to add the restriction of not having cycles in your paths, in which case you would have to check at each step of the search whether the newly reached node is already in the path covered so far, and prune that path if it is.
Here is some pseudo code:
function find_paths(graph G, node A, node B, int N):
    list<path> L, L';
    L := empty list;
    push path(A) in L;
    for i = 2 to N begin
        L' := empty list;
        for each path P in L begin
            if last node of P = B then push P in L'
            else
                for each successor S of last node in P begin
                    if S not in P then
                        path P' := P;
                        push S in P';
                        push P' in L';
                    endif
                end
            endif
        end
        L := L';
    end
    for each path P in L begin
        if last node of P != B then
            remove P from L
        endif
    end
    return L;
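Transcribed into Python (a sketch; as in the pseudocode, N counts the nodes in a path, so the loop performs N - 1 extensions):

```python
def find_paths_by_levels(graph, a, b, n):
    """Grow all simple paths from a level by level, carrying finished
    paths along unchanged; keep those that reached b within n nodes."""
    frontier = [[a]]
    for _ in range(n - 1):
        nxt = []
        for path in frontier:
            if path[-1] == b:
                nxt.append(path)            # already at b: keep as-is
            else:
                for succ in graph[path[-1]]:
                    if succ not in path:    # avoid cycles
                        nxt.append(path + [succ])
        frontier = nxt
    return [p for p in frontier if p[-1] == b]
```

Unlike the backtracking version, this keeps every partial path alive at once, which is simple but can consume a lot of memory on dense graphs.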
I think a possible improvement (depending on the size of the problem and the maximum cost N) to the recursive backtracking algorithm suggested by jma127 would be to pre-compute the minimum distance of each node from the destination (a shortest-path tree), then append the following to the conditions tested to terminate your recursion:
You get to a node whose minimum distance from the destination is greater than the maximum cost N minus the distance travelled to reach the current node.
If one needs to run the algorithm several times for different sources and destinations, one could run, e.g., Johnson's algorithm at the beginning to create a matrix of the shortest paths between all pairs of nodes.