Maximal clique recursive - recursion

I have a problem I'm trying to solve: I want to implement a method that takes a clique and returns the largest clique that contains it. The method I'm working on is recursive and uses backtracking to accept or reject solutions according to the clique definition. My problem is that I don't want to use the Bron-Kerbosch algorithm, because I want only one parameter to be passed to the method.
Here's pseudocode for what I did:
public ArrayList<Integer> findClique(ArrayList<Integer> R)
{
    if (no more candidates)
    {
        return R;
    }
    else
    {
        for (int candidate = next candidate; candidate < nodesNumber; candidate++)
        {
            if (connected(R, candidate))
            {
                R.add(candidate);
                findClique(R);
            }
            printOutput(R);
            R.remove(candidate);
        }
    }
}
Can you help me with ideas on how to choose the condition that breaks the recursion? I don't know how to keep the value of the next candidate until the next call without passing it as a method parameter!
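One idea (a sketch only, not a complete answer): derive the candidates from R itself on every call, so nothing has to be remembered between calls. The version below reuses connected() and nodesNumber from the pseudocode above, assumes the input R is already a clique, and is untested:

public ArrayList<Integer> findClique(ArrayList<Integer> R)
{
    ArrayList<Integer> best = new ArrayList<>(R); // best clique found so far
    for (int candidate = 0; candidate < nodesNumber; candidate++)
    {
        if (!R.contains(candidate) && connected(R, candidate))
        {
            R.add(candidate);                          // try to extend the clique
            ArrayList<Integer> extended = findClique(R);
            if (extended.size() > best.size())
            {
                best = extended;
            }
            R.remove(R.size() - 1);                    // backtrack
        }
    }
    // The base case is implicit: when no vertex can extend R, the loop body
    // never runs and the copy of R held in best is returned.
    return best;
}

With this shape the stop condition is simply "no candidate passes the test", so there is nothing to carry between calls.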

Related

How do I mutate and optionally remove elements from a vec without memory allocation?

I have a Player struct that contains a vec of Effect instances. I want to iterate over this vec, decrease the remaining time for each Effect, and then remove any effects whose remaining time reaches zero. So far so good. However, for any effect removed, I also want to pass it to Player's undo_effect() method, before destroying the effect instance.
This is part of a game loop, so I want to do this without any additional memory allocation if possible.
I've tried using a simple for loop and also iterators, drain, retain, and filter, but I keep running into issues where self (the Player) would be mutably borrowed more than once, because modifying self.effects requires a mutable borrow, as does the undo_effect() method. The drain_filter() in nightly looks useful here, but it was first proposed in 2017, so I'm not holding my breath on that one.
One approach that did compile (see below), was to use two vectors and alternate between them on each frame. Elements are pop()'ed from vec 1 and either push()'ed to vec 2 or passed to undo_effect() as appropriate. On the next game loop iteration, the direction is reversed. Since each vec will not shrink, the only allocations will be if they grow larger than before.
I started abstracting this as its own struct but want to check if there is a better (or easier) way.
This one won't compile. The self.undo_effect() call would borrow self as mutable twice.
struct Player {
    effects: Vec<Effect>
}

impl Player {
    fn update(&mut self, delta_time: f32) {
        for effect in &mut self.effects {
            effect.remaining -= delta_time;
            if effect.remaining <= 0.0 {
                effect.active = false;
            }
        }
        for effect in self.effects.iter_mut().filter(|e| !e.active) {
            self.undo_effect(effect); // error: second mutable borrow of self
        }
        self.effects.retain(|e| e.active);
    }
}
The below compiles ok - but is there a better way?
struct Player {
    effects: [Vec<Effect>; 2],
    index: usize
}

impl Player {
    fn update(&mut self, delta_time: f32) {
        let src_index = self.index;
        let target_index = if self.index == 0 { 1 } else { 0 };
        self.effects[target_index].clear(); // should be unnecessary.
        while !self.effects[src_index].is_empty() {
            if let Some(x) = self.effects[src_index].pop() {
                if x.active {
                    self.effects[target_index].push(x);
                } else {
                    self.undo_effect(&x);
                }
            }
        }
        self.index = target_index;
    }
}
Is there an iterator version that works without unnecessary memory allocations? I'd be ok with allocating memory only for the removed elements, since this will be much rarer.
Would an iterator be more efficient than the pop()/push() version?
EDIT 2020-02-23:
I ended up coming back to this and I found a slightly more robust solution, similar to the above but without the danger of requiring a target_index field.
std::mem::swap(&mut self.effects, &mut self.effects_cache);
self.effects.clear();
while !self.effects_cache.is_empty() {
    if let Some(x) = self.effects_cache.pop() {
        if x.active {
            self.effects.push(x);
        } else {
            self.undo_effect(&x);
        }
    }
}
Since self.effects_cache is unused outside this method, and the method does not require it to hold any particular value beforehand, the rest of the code can simply use self.effects and it will always be current.
The main issue is that you are borrowing a field (effects) of Player and trying to call undo_effect while this field is borrowed. As you noted, this does not work.
You already realized that you could juggle two vectors, but you can actually juggle only one (permanent) vector:
struct Player {
    effects: Vec<Effect>
}

impl Player {
    fn update(&mut self, delta_time: f32) {
        for effect in &mut self.effects {
            effect.remaining -= delta_time;
            if effect.remaining <= 0.0 {
                effect.active = false;
            }
        }
        // Temporarily remove effects from Player.
        let mut effects = std::mem::replace(&mut self.effects, vec!());
        // Call Player::undo_effect (no outstanding borrows).
        // `drain_filter` could also be used, for better efficiency.
        for effect in effects.iter_mut().filter(|e| !e.active) {
            self.undo_effect(effect);
        }
        // Restore effects.
        self.effects = effects;
        self.effects.retain(|e| e.active);
    }
}
This will not allocate because the default constructor of Vec does not allocate.
On the other hand, the double-vector solution might be more efficient as it allows a single pass over self.effects rather than two. YMMV.
If I understand you correctly, you have two questions:
How can I split a Vec into two Vecs (one which fulfill a predidate, the other one which doesn't)
Is it possible to do without memory overhead
There are multiple ways of splitting a Vec into two (or more).
You could use Iterator::partition, which will give you two distinct collections that can be used further.
There is the unstable Vec::drain_filter function, which does the same but on the Vec itself.
Use splitn (or splitn_mut), which splits your Vec/slice at elements matching a predicate into at most n (2 in your case) subslices, yielded by an iterator.
Depending on what you want to do, all solutions are applicable and good to use.
Is it possible without memory overhead? Not with the solutions above, because you need to create a second Vec which can hold the filtered items. But there is a solution: you can "sort" the Vec so that the first half contains all the items that fulfill the predicate (e.g. are not expired) and the second half contains the ones that fail it (are expired). You just need to count the number of items that fulfill the predicate.
Then you can use split_at (or split_at_mut) to split the Vec/slice into two distinct slices. Afterwards you can truncate the Vec to the length of the good items, and the other ones will be dropped.
The best answer is this one in C++.
[O]rder the indices vector, create two iterators into the data vector, one for reading and one for writing. Initialize the writing iterator to the first element to be removed, and the reading iterator to one beyond that one. Then in each step of the loop increment the iterators to the next value (writing) and next value not to be skipped (reading) and copy/move the elements. At the end of the loop call erase to discard the elements beyond the last written to position.
The Rust adaptation to your specific problem is to move the removed items out of the vector instead of just writing over them.
An alternative is to use a linked list instead of a vector to hold your Effect instances.

How to "leap of faith" when using recursion?

For me, writing a recursive method always takes a lot of time, because I make test cases to check whether my recursive case works and draw a stack diagram. However, when I ask others about it, they just say that I need to trust that it will work. How am I supposed to believe that if I don't know what is going on in the recursive case?
You define what is going on in the recursive case, just as you define the rest of the method. Imagine someone else wrote a method to do what the one you are writing does; you wouldn't have a problem calling that, would you? The only difference is that you are that method's author, and it just happens to be the one being written.
For example: I am writing the following method:
// Sort array a[i..j-1] in ascending order
method sort_array( a, i, j ) {
..
}
The base case is easy:
if ( i >= j-1 ) // there is at most one element to be sorted
return; // a[i..j-1] is already sorted
Now, when that isn't true, I could do the following:
else {
k = index_of_max( a, i, j );
swap( a, j-1, k );
At this point, I know that a[j-1] has the correct value, so I just need to sort what comes before it -- fortunately, I have a method to do just that:
sort_array( a, i, j-1 );
}
No leap of faith is required; I know that recursive call will work because I wrote the method to do just that.
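For concreteness, here is one way the pseudocode above could be written out in Java (a sketch added for illustration, not part of the original answer), with index_of_max and swap filled in:

class RecursiveSort {
    // Sort a[i..j-1] in ascending order.
    static void sortArray(int[] a, int i, int j) {
        if (i >= j - 1) {            // base case: at most one element, already sorted
            return;
        }
        int k = indexOfMax(a, i, j); // index of the largest element in a[i..j-1]
        swap(a, j - 1, k);           // put it in its final position
        sortArray(a, i, j - 1);      // the recursive call sorts everything before it
    }

    static int indexOfMax(int[] a, int i, int j) {
        int k = i;
        for (int m = i + 1; m < j; m++) {
            if (a[m] > a[k]) {
                k = m;
            }
        }
        return k;
    }

    static void swap(int[] a, int x, int y) {
        int tmp = a[x];
        a[x] = a[y];
        a[y] = tmp;
    }
}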

Making cyclic graphs in F#. Is mutability required?

I'm trying to create a cyclic graph in F#.
My node type looks something like this:
type Node = { Value : int; Edges : Node list }
My question is: Do I need to make Edges mutable in order to have cycles?
F# makes it possible to create immediate recursive object references with cycles, but this really only works on (fairly simple) records. So, if you try this on your definition it won't work:
let rec loop =
{ Value = 0;
Edges = [loop] }
However, you can still avoid mutation - one reasonable alternative is to use lazy values:
type Node = { Value : int; Edges : Lazy<Node list>}
This way, you are giving the compiler "enough time" to create a loop value before it needs to evaluate the edges (and access the loop value again):
let rec loop =
{ Value = 0;
Edges = lazy [loop] }
In practice, you'll probably want to call some functions to create the edges, but that should work too. You should be able to write e.g. Edges = lazy (someFancyFunction loop).
Alternatively, you could also use seq<Node> for the edges (as sequences are lazy by default), but that would re-evaluate the edges every time, so you probably don't want to do that.

Backtracking recursively with multiple solutions

function BACKTRACKING-SEARCH(csp) returns a solution, or failure
    return RECURSIVE-BACKTRACKING({ }, csp)

function RECURSIVE-BACKTRACKING(assignment, csp) returns a solution, or failure
    if assignment is complete then
        return assignment
    var ← SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)
    for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
        if value is consistent with assignment according to CONSTRAINTS[csp] then
            add {var = value} to assignment
            result ← RECURSIVE-BACKTRACKING(assignment, csp)
            if result ≠ failure then
                return result
            remove {var = value} from assignment
    return failure
This is backtracking-recursion pseudocode from AIMA. However, I don't understand whether it returns ALL possible solutions or just the first one found. If it is the latter, could you please help me modify it to return a list of possible solutions instead (or at least to update some global list)?
EDIT: I implemented this algorithm in Java. However, there is one problem:
if I don't return assignment but save it in result instead, the recursion's stop condition disappears (i.e. it no longer exists). How can I implement another stop condition? Maybe I should return true at the end?
Here is my code :
/**
 * The actual backtracking. Unfortunately, I don't have time to implement LCV or MCV,
 * therefore it will be just an ordinary variable-by-variable search.
 * @param line
 * @param onePossibleSituation
 * @param result
 */
public static boolean recursiveBacktrack(Line line, ArrayList<Integer> onePossibleSituation, ArrayList<ArrayList<Integer>> result) {
    if (onePossibleSituation.size() == line.getNumOfVars()) {
        // instead of return(assignment)
        ArrayList<Integer> situationCopy = new ArrayList<Integer>();
        situationCopy.addAll(onePossibleSituation);
        result.add(situationCopy);
        onePossibleSituation.clear();
    }
    Block variableToAssign = null;
    // iterate through all variables and choose one unassigned
    for (int i = 0; i < line.getNumOfVars(); i++) {
        if (!line.getCspMiniTaskVariables().get(i).isAssigned()) {
            variableToAssign = line.getCspMiniTaskVariables().get(i);
            break;
        }
    }
    // for each domain value for the given block
    for (int i = line.getCspMiniTaskDomains().get(variableToAssign.getID())[0];
         i <= line.getCspMiniTaskDomains().get(variableToAssign.getID())[0]; i++) {
        if (!areThereConflicts(line, onePossibleSituation)) {
            // complete the assignment
            variableToAssign.setStartPositionTemporary(i);
            variableToAssign.setAssigned(true);
            onePossibleSituation.add(i);
            // do backtracking
            boolean isPossibleToPlaceIt = recursiveBacktrack(line, onePossibleSituation, result);
            if (!isPossibleToPlaceIt) {
                return(false);
            }
        }
        // unassign
        variableToAssign.setStartPositionTemporary(-1);
        variableToAssign.setAssigned(false);
        onePossibleSituation.remove(i);
    }
    // end of backtracking
    return(false);
}
This code checks whether a solution has been found and, if it has, returns the solution. Otherwise, it continues backtracking. That means it returns the first solution found.
if result ≠ failure then
    return result
remove {var = value} from assignment
You can modify it like this:
if result ≠ failure then
    PRINT result // do not return, just save the result
remove {var = value} from assignment
Or, better, modify this part:
if assignment is complete then
    print assignment
    return assignment // print it and return
About the edited question:
First, return true in the first if, so the recursion knows that it has found a solution. Second, there is probably a mistake here:
if (!isPossibleToPlaceIt) {
    return(false);
}
Should be
if (isPossibleToPlaceIt) {
    return(true);
}
Because if your backtracking has found something, it returns true, which means you don't have to check anything else any longer.
EDIT#2: If you want to continue backtracking to find ALL solutions, just remove the whole previous if section with return:
//if(isPossibleToPlaceIt){
// return(true);
//}
So the search will continue either way.
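To make the "collect every solution" idea concrete, here is a small self-contained Java sketch of the same pattern (the toy constraint here, a few variables over a small domain with an all-different rule, is only an illustration and not the asker's Line/Block model):

import java.util.ArrayList;

public class CollectAllSolutions {
    static final int NUM_VARS = 3;
    static final int DOMAIN_SIZE = 4;

    static void backtrack(ArrayList<Integer> assignment,
                          ArrayList<ArrayList<Integer>> results) {
        if (assignment.size() == NUM_VARS) {          // assignment is complete
            results.add(new ArrayList<>(assignment)); // save a copy, then keep searching
            return;
        }
        for (int value = 0; value < DOMAIN_SIZE; value++) {
            if (!assignment.contains(value)) {        // "consistent" here: all values distinct
                assignment.add(value);                // choose
                backtrack(assignment, results);       // explore
                assignment.remove(assignment.size() - 1); // un-choose (backtrack)
            }
        }
        // No early return on success: the results list accumulates every solution.
    }

    public static void main(String[] args) {
        ArrayList<ArrayList<Integer>> results = new ArrayList<>();
        backtrack(new ArrayList<>(), results);
        System.out.println(results.size() + " solutions: " + results);
    }
}

The stop condition is simply "the assignment is complete"; success is recorded in the results list rather than signalled through the return value.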

Identifying a recursive function

As far as I know, a recursive function is a function that calls itself, and it has the characteristic of having a base case. This is a function for pre-order traversal of a binary tree. Is this a recursive function? The absence of a base case confuses me.
void pre_order(struct node* current) { // preorder traversal
    printf("%d\n", current->data);
    if (current->left != NULL) {
        pre_order(current->left);
    }
    if (current->right != NULL) {
        pre_order(current->right);
    }
}
Since it calls itself, it is a recursive function; it's that simple. There is also a base case here, but it's perhaps a little hidden: when we get to a leaf of the binary tree, both the left and right children are NULL, so no more recursive calls happen. That hidden stopping point is the base case.
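For comparison, here is a sketch of the same traversal in Java (the Node class below is hypothetical, just mirroring the C struct) with the base case written explicitly instead of hidden in the child checks:

class Node {
    int data;
    Node left, right;
}

class PreOrder {
    static void preOrder(Node current) {
        if (current == null) {          // explicit base case: empty subtree, stop recursing
            return;
        }
        System.out.println(current.data);
        preOrder(current.left);         // recursive case
        preOrder(current.right);
    }
}

Both versions do the same work; the only difference is where the "stop" decision is written.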
