I have these Circles:
I want to get the list of all possible solutions of maximum non-intersecting circles. This is an illustration of the solution I want, starting from node A.
Therefore the possible solutions from node A are:
1 = [A,B,C], 2 = [A,B,E], 3 = [A,C,B], 4 = [A,E,B], etc.
I want to store all of the possibilities in a list, which will then be used for weighting and selecting the best result. However, I'm still stuck on creating the list of all possibilities.
I've tried to code the structure here, but I'm still confused about the backtracking and recursion. Could anyone help?
# List of circle
list_of_circle = ['A','B','C','D','E']
# List of all possible solutions
result = []
# List of possible nodes
ways = []
for k in list_of_circle:
    if len(list_of_circle)==0:
        result.append(ways)
    else:
        ways.append[k]
        list_of_circle.remove(k)
        for j in list_of_circle:
            if k.intersects(j):
                list_of_circle.remove(j)
return result
Here is a possible solution. It recursively branches on whether each circle is included and returns every valid (non-intersecting) selection, from which you can then pick the largest:
def get_max_non_intersect(selected_circles, current_circle_idx, all_circles):
    if current_circle_idx == len(all_circles):  # base case: every circle has been considered
        return [selected_circles]  # wrap the finished selection so branches can be concatenated
    # recursively collect all valid selections that skip the current circle
    list_without_current_circle = get_max_non_intersect(selected_circles, current_circle_idx + 1, all_circles)
    # now we check if we can add the current circle to the ones selected
    current_intersects_selected = False
    current_circle = all_circles[current_circle_idx]
    for selected_circle in selected_circles:
        if intersects(current_circle, selected_circle):
            current_intersects_selected = True
            break
    if current_intersects_selected:  # we cannot add the current circle
        return list_without_current_circle
    else:  # we can also add the current circle
        list_with_current_circle = get_max_non_intersect(selected_circles + [current_circle], current_circle_idx + 1, all_circles)
        return list_with_current_circle + list_without_current_circle
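For example, here is one way to call it (a sketch: the circles and the intersects helper below are made up for illustration, representing circles as (x, y, radius) tuples):

def intersects(c1, c2):
    # hypothetical helper: circles overlap when the squared center distance
    # is smaller than the squared sum of the radii
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 < (r1 + r2) ** 2

all_circles = [(0, 0, 1), (3, 0, 1), (6, 0, 1), (1, 1, 1), (4, 1, 1)]  # stand-ins for A..E
solutions = get_max_non_intersect([], 0, all_circles)
best_size = max(len(s) for s in solutions)
print([s for s in solutions if len(s) == best_size])  # the largest non-intersecting selections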
I stumbled upon the question below and was unable to solve it. Can someone tell me what the approach is here?
There's no way to do this except by brute-force search. For each tile that's not in the right position, there are at most 4 possible swaps to make. You make a swap, then add the new position to the list of ones you haven't tried, making sure not to revisit any position you've seen before. Track the depth of the search, and when you reach the final position, the depth is the answer.
from collections import deque

incoming = (7,3,2,4,1,5,6,8,9)
answer = (1,2,3,4,5,6,7,8,9)
primes = (2,3,5,7,11,13,17)

def solve(base):
    seen = set()
    untried = deque([(base, 0)])  # FIFO queue of (position, depth)
    while untried:
        array, depth = untried.popleft()
        print(depth, array)
        if array == answer:
            print("ANSWER!", depth)
            return depth
        if array in seen:
            print("seen")
            continue
        seen.add(array)
        for n in range(9):
            if array[n] == n + 1:  # tile already in place
                continue
            for dx in (-1, -3, 1, 3):
                # horizontal swaps must stay within the same row of the 3x3 grid
                if dx in (-1, 1) and n // 3 != (n + dx) // 3:
                    continue
                if 0 <= n + dx < 9 and array[n] + array[n + dx] in primes:
                    # attempt a swap
                    a = list(array)
                    a[n], a[n + dx] = a[n + dx], a[n]
                    untried.append((tuple(a), depth + 1))
    print("fail")
    return -1

solve(incoming)
I am using the shortest path algorithm from LightGraphs.jl. In the end I want to collect some information about the nodes along the path. In order to do that I need to be able to extract the vertices from the edges that the function gives back.
using LightGraphs
g = cycle_graph(4)
path = a_star(g, 1, 3)
edge1 = path[1]
Using this I get: Edge 1 => 2
How would I automatically get the vertices 1, 2 without having to look at the edge manually? I'm thinking about something like edge1[1] or edge1.From, neither of which works.
Thanks in advance!
The accessors for AbstractEdge types are src and dst, used like this:
using LightGraphs
g = cycle_graph(4)
path = a_star(g, 1, 3)
edge1 = path[1]
s = src(edge1)
d = dst(edge1)
println("source: $s") # prints "source: 1"
println("destination: $d") # prints "destination: 2"
I have a graph with an adjacency matrix of shape adj_mat.shape = (4000, 4000). My current problem involves finding the list of path lengths (the sequence of nodes is not so important) for paths that traverse from the source (row = 0) to the target (col = adj_mat.shape[0] - 1).
I am not interested in finding the path sequences; I am only interested in propagating the path length. As a result, this is different from finding all simple paths, which would be too slow (i.e., find all paths from source to target, then score each path). Is there a performant way to do this?
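For reference, the brute force I want to avoid looks something like this (a sketch, assuming G has been built from adj_mat as in the snippet below):

import networkx as nx

# brute force for contrast: enumerate every simple path and score it by
# summing edge weights; this blows up combinatorially at 4000 nodes
all_lengths = [sum(G[u][v]['weight'] for u, v in zip(p, p[1:]))
               for p in nx.all_simple_paths(G, 0, G.number_of_nodes() - 1)]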
DFS is suggested as one possible strategy (noted here). My current implementation (below) is simply not optimal:
import networkx as nx
import numpy as np

# create graph
G = nx.from_numpy_matrix(adj_mat, create_using=nx.DiGraph())

# initialize nodes
for node in G.nodes:
    G.nodes[node]['cprob'] = []

# set starting node value
G.nodes[0]['cprob'] = [0]

def propagate_prob(G, node):
    # find incoming edges to node
    predecessors = list(G.predecessors(node))
    curr_node_arr = []
    for prev_node in predecessors:
        # get incoming edge weight
        edge_weight = G.get_edge_data(prev_node, node)['weight']
        # get predecessor node value, computing it recursively if not yet cached
        if len(G.nodes[prev_node]['cprob']) == 0:
            G.nodes[prev_node]['cprob'] = propagate_prob(G, prev_node)
        prev_node_arr = G.nodes[prev_node]['cprob']
        # add incoming edge weight to prev_node arr
        curr_node_arr = np.concatenate([curr_node_arr, np.array(edge_weight) + np.array(prev_node_arr)])
    # update current node array
    G.nodes[node]['cprob'] = curr_node_arr
    return G.nodes[node]['cprob']

# calculate all path lengths from source to sink
part_func = propagate_prob(G, adj_mat.shape[0] - 1)
I don't have a large example at hand (e.g. >300 nodes), but I found a non-recursive solution:
import networkx as nx

g = nx.DiGraph()
nx.add_path(g, range(7))
g.add_edge(0, 3)
g.add_edge(0, 5)
g.add_edge(1, 4)
g.add_edge(3, 6)

# first step: retrieve a topological sorting
sorted_nodes = nx.algorithms.topological_sort(g)
start = 0
target = 6

path_lengths = {start: [0]}
for node in sorted_nodes:
    if node == target:
        print(path_lengths[node])
        break
    if node not in path_lengths or g.out_degree(node) == 0:
        continue
    # every path reaching this node grows by one edge toward each successor
    new_path_length = [i + 1 for i in path_lengths[node]]
    for successor in g.successors(node):
        if successor in path_lengths:
            path_lengths[successor].extend(new_path_length)
        else:
            path_lengths[successor] = new_path_length.copy()
    # this node is fully processed; free its list
    del path_lengths[node]
Output: [2, 4, 2, 4, 4, 6]
If you are only interested in the number of paths of each length, e.g. {2: 2, 4: 3, 6: 1} for the above example, you could even reduce the lists to dicts.
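For instance, collections.Counter does that reduction directly:

from collections import Counter

print(Counter([2, 4, 2, 4, 4, 6]))  # Counter({4: 3, 2: 2, 6: 1})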
Background
Some explanation of what I'm doing (which I hope works for larger examples as well). The first step is to retrieve a topological sorting. Why? Because then I know in which "direction" the edges flow, and I can simply process the nodes in that order without missing any edge or any backtracking as in a recursive variant. Afterwards, I initialise the start node with a list containing the current path length ([0]). This list is copied to all successors, while updating the path length (all elements +1). The goal is that in each iteration the path lengths from the starting node to every processed node are calculated and stored in the dict path_lengths. The loop stops after reaching the target node.
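For the example graph above, the chain 0 → 1 → … → 6 forces a unique topological order, so this first step is easy to verify (same graph as in the code above):

import networkx as nx

g = nx.DiGraph()
nx.add_path(g, range(7))
g.add_edges_from([(0, 3), (0, 5), (1, 4), (3, 6)])
print(list(nx.algorithms.topological_sort(g)))  # [0, 1, 2, 3, 4, 5, 6]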
With igraph I can calculate up to ~300 nodes in about 1 second. I also found that accessing the adjacency matrix itself (rather than calling igraph functions to retrieve edges/vertices) saves time. The two key bottlenecks are 1) appending to a long list efficiently (while also keeping it in memory) and 2) finding a way to parallelize. The runtime grows exponentially past ~300 nodes; I would love to see if someone has a faster solution (that also fits into memory).
import igraph
import numpy as np

# create graph from adjacency matrix (trans_mat_pad: weighted adjacency matrix)
G = igraph.Graph.Adjacency((trans_mat_pad > 0).tolist())

# add edge weights
G.es['weight'] = trans_mat_pad[trans_mat_pad.nonzero()]

# initialize nodes
for node in range(trans_mat_pad.shape[0]):
    G.vs[node]['cprob'] = []

# set starting node value
G.vs[0]['cprob'] = [0]

def propagate_prob(G, node, trans_mat_pad):
    # find incoming edges to node
    predecessors = trans_mat_pad[:, node].nonzero()[0]  # G.get_adjlist(mode='IN')[node]
    curr_node_arr = []
    for prev_node in predecessors:
        # get incoming edge weight
        edge_weight = trans_mat_pad[prev_node, node]  # G.es[prev_node]['weight']
        # get predecessor node value, recursing if it has not been computed yet
        if len(G.vs[prev_node]['cprob']) == 0:
            curr_node_arr = np.concatenate([curr_node_arr, np.array(edge_weight) + propagate_prob(G, prev_node, trans_mat_pad)])
        else:
            curr_node_arr = np.concatenate([curr_node_arr, np.array(edge_weight) + np.array(G.vs[prev_node]['cprob'])])
    ## NB: if memory-constrained, uncomment below to cap the array size
    # if len(curr_node_arr) > 100:
    #     curr_node_arr = np.sort(curr_node_arr)[:100]
    # update current node array
    G.vs[node]['cprob'] = curr_node_arr
    return G.vs[node]['cprob']

# calculate path lengths from source to sink
path_len = propagate_prob(G, trans_mat_pad.shape[0]-1, trans_mat_pad)
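As a sanity check, a unit-weight version of the 7-node example from the previous answer (a made-up toy input) reproduces its output up to ordering:

import numpy as np
import igraph

# toy DAG from the previous answer, all edge weights set to 1
trans_mat_pad = np.zeros((7, 7))
for u, v in [(0,1), (1,2), (2,3), (3,4), (4,5), (5,6), (0,3), (0,5), (1,4), (3,6)]:
    trans_mat_pad[u, v] = 1.0

G = igraph.Graph.Adjacency((trans_mat_pad > 0).tolist())
G.es['weight'] = trans_mat_pad[trans_mat_pad.nonzero()]
for node in range(trans_mat_pad.shape[0]):
    G.vs[node]['cprob'] = []
G.vs[0]['cprob'] = [0]

print(sorted(propagate_prob(G, trans_mat_pad.shape[0] - 1, trans_mat_pad).tolist()))
# [2.0, 2.0, 4.0, 4.0, 4.0, 6.0] -- matches [2, 4, 2, 4, 4, 6] above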
I am working on an online challenge problem. I can solve it with brute force, but when the input length becomes very large the runtime increases significantly. I believe there must be a better algorithm to solve this problem, but it is beyond me. I'd appreciate any brilliant ideas.
If you are allowed to use numpy, you can use the numpy.cumsum method to store the sum(A[:i]), sum(A[i:]), sum(B[:i]), and sum(B[i:]) values in four different arrays, as follows.
import numpy as np

A = []  # Array A
B = []  # Array B

A_start_to_i = np.cumsum(A)            # A_start_to_i[i] = sum(A[:i+1])
A_i_to_end = np.cumsum(A[::-1])[::-1]  # A_i_to_end[i] = sum(A[i:])
B_start_to_i = np.cumsum(B)            # B_start_to_i[i] = sum(B[:i+1])
B_i_to_end = np.cumsum(B[::-1])[::-1]  # B_i_to_end[i] = sum(B[i:])
Now all you need to do is create sum(A[:i]) + sum(B[i:]) and sum(B[:i]) + sum(A[i:]) and find the index with the minimum element. (Depending on whether the element at the split index belongs to the prefix or the suffix in your problem, you may need to shift one of the two arrays by one position.)
first_array = A_start_to_i + B_i_to_end   # first_array[i] = sum(A[:i+1]) + sum(B[i:])
second_array = A_i_to_end + B_start_to_i  # second_array[i] = sum(A[i:]) + sum(B[:i+1])

# find which array holds the overall minimum element
idx = np.argmin([min(first_array), min(second_array)])
if idx == 0:
    # first array has the minimum element
    i = np.argmin(first_array)
else:
    # second array has the minimum element
    i = np.argmin(second_array)
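A quick toy run (arrays made up for illustration):

import numpy as np

A = [3, 1, 4]
B = [2, 7, 1]
A_start_to_i = np.cumsum(A)            # [3, 4, 8]
A_i_to_end = np.cumsum(A[::-1])[::-1]  # [8, 5, 4]
B_start_to_i = np.cumsum(B)            # [2, 9, 10]
B_i_to_end = np.cumsum(B[::-1])[::-1]  # [10, 8, 1]
first_array = A_start_to_i + B_i_to_end   # [13, 12, 9]
second_array = A_i_to_end + B_start_to_i  # [10, 14, 14]
# the overall minimum is 9, found in first_array at i = 2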
I am stuck at creating a matrix of matrices (vectors in this case).
What I have so far:
index = zeros(size(A)) // Some matrix, but it isn't important to the question
indexIndex = 1;
for rows=1:length(R)
    for columns=1:length(K)
        if(A(rows,columns)==x)
            V=[rows columns]; // I create a vector holding the row + column
            index(indexIndex) = V(1,2) // I want to store all these vectors
            indexIndex = indexIndex + 1
        end
    end
end
I have tried various ways of getting the information out of V (such as V(1:2)), but nothing seems to work correctly.
In other words, I'm trying to get an array of points.
Thanks in advance
I do not understand your question exactly. What is the size of A? What are x, K, and R? But under some assumptions:
Using a list
You could use a list:
// Create some matrix A
A = zeros(8,8)
// Initialize the list
index = list();
// Get the dimensions of A
rows = size(A,1);
cols = size(A,2);
x = 0;
for row=1:rows
    for col=1:cols
        if(A(row,col)==x)
            // Create a vector holding row and col
            V=[row col];
            // Append it to the list using $ (last index) + 1
            index($+1) = V
        end
    end
end
Single indexed matrices
Another approach would be to make use of the fact that a multi-dimensional matrix can also be indexed by a single value.
For instance, create a random matrix named a:
-->a = rand(3,3)
a =
0.6212882 0.5211472 0.0881335
0.3454984 0.2870401 0.4498763
0.7064868 0.6502795 0.7227253
Access the first value:
-->a(1)
ans =
0.6212882
-->a(1,1)
ans =
0.6212882
Access the second value:
-->a(2)
ans =
0.3454984
-->a(2,1)
ans =
0.3454984
So that demonstrates how single indexing works. Now apply it to your problem, knocking out one of the for-loops:
// Create some matrix A
A = zeros(8,8)
// Initialize the array of indices
index = [];
x = 0;
// length(A) is the total number of entries, so one loop covers the whole matrix
for i=1:length(A)
    if(A(i)==x)
        // Append the single index using $ (last index) + 1
        index($+1) = i;
    end
end
Without a for-loop
If you just need the values that adhere to a certain condition, you could also do something like this:
values = A(A==x);
Be careful when comparing doubles: they are not always (un)equal when you expect them to be.
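If you need the positions rather than the values, find returns them directly (a sketch; as above, A and x are your matrix and target value):

// Linear indices of all entries equal to x
idx = find(A == x);
// Or as row/column pairs
[r, c] = find(A == x);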