A slightly faster Bellman-Ford

I made a slight modification to Bellman-Ford so that it only performs "useful" relaxations, i.e. relaxations that actually update d(v).
define Relax(u, v):
    if d(v) > d(u) + w(u,v)    // w(u,v) = weight of edge u->v
        d(v) = d(u) + w(u,v)

INIT                           // our usual initialization
Queue Q
Q ← s                          // Q holds vertices whose d(v) values have been updated recently
while (Q not empty)
{
    u ← Frontof(Q)
    for each neighbor v of u
    {
        Relax(u, v)
        if d(v) was updated by Relax and v not in Q    // here's where we're a bit smarter:
            add v to end of Q                          // we only add v to Q if the relaxation changed d(v)
    }
}
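For concreteness, here is a runnable Python sketch of the pseudocode above (this queue-based variant is often called SPFA; the adjacency-list format is my own choice, and like the pseudocode it does not terminate if a negative cycle is reachable):

from collections import deque

def spfa(adj, s):
    # adj[u] is a list of (v, w) pairs, one per edge u -> v of weight w
    d = {u: float("inf") for u in adj}
    d[s] = 0
    Q = deque([s])
    in_queue = {u: u == s for u in adj}
    while Q:
        u = Q.popleft()
        in_queue[u] = False
        for v, w in adj[u]:
            if d[v] > d[u] + w:          # Relax(u, v)
                d[v] = d[u] + w
                if not in_queue[v]:      # only enqueue v if d(v) actually changed
                    Q.append(v)
                    in_queue[v] = True
    return d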
Now, if all shortest paths have at most k arcs, then the worst-case runtime is O(V*k), since this smart version only pushes updates through at most k arcs. This is a bit faster than the original O(V*E), since k < |E|.
Can anyone tell me of a type of graph for which this improved version is no better than the original Bellman-Ford algorithm? That is, one for which the performance is still O(V*E)?

Consider a graph where all edges have negative weight. In such a graph, a vertex u will be added to Q multiple times if it has multiple incoming edges.
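To see this re-enqueueing concretely, here is a small experiment (my own construction, not part of the answer): a DAG in which every edge has weight -1, the source fans out to every vertex, and back-edges j -> j-1 make improvements trickle backwards one vertex per pass through the queue. With Θ(V) edges, the queue-based algorithm does Θ(V²) = Θ(V·E) work on it, so it is no better than plain Bellman-Ford here:

from collections import deque

def spfa_pops(adj, s):
    # same loop as the spfa sketch in the question above, but counts dequeues
    d = {u: float("inf") for u in adj}
    d[s] = 0
    Q = deque([s])
    in_queue = {u: u == s for u in adj}
    pops = 0
    while Q:
        u = Q.popleft()
        in_queue[u] = False
        pops += 1
        for v, w in adj[u]:
            if d[v] > d[u] + w:
                d[v] = d[u] + w
                if not in_queue[v]:
                    Q.append(v)
                    in_queue[v] = True
    return pops

n = 100
adj = {i: [] for i in range(n)}
adj[0] = [(j, -1) for j in range(1, n)]   # source fans out, neighbors in ascending order
for j in range(2, n):
    adj[j].append((j - 1, -1))            # back-edge: improvements propagate one step per pass
print(spfa_pops(adj, 0))                  # roughly n**2 / 2 dequeues, quadratic in n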
The statement k < |E| is incorrect: if there is a negative cycle in the graph, then k is infinite.

Related

Dijkstra worst-case complexity sequence of inputs

I am looking for a sequence of inputs for Dijkstra's algorithm, implemented with a regular heap, on which its actual complexity would be Θ((E+V) log V).
I know how to implement Dijkstra and how it works, and I also understand that the most time-consuming operations are adding a vertex to the heap and changing the distance of a vertex. However, I am not sure how to find a graph (or a sequence of graphs) that would be the worst-case input for Dijkstra.
Also, if you have any general tips on how to find a sequence of inputs for the worst-case complexity, that would be helpful.
Let the vertices be numbered from 1 to n, and suppose you want to find the path from vertex 1 to vertex n. Let e[i][j] be the length of the edge connecting i and j. Initially e[1][2] = e[2][3] = ... = e[n - 1][n] = 1. Now iterate through the vertices from n - 2 down to 1. At the i-th vertex, for each j in [i + 2, n], set e[i][j] = e[i][i + 1] + e[i + 1][j] + 1.
Now we have a complete graph. In each iteration Dijkstra will update O(n) vertices, so it performs O(n^2) = O(E) heap updates, each working in O(log n).
So the final asymptotics will be O(n log(n) + E log(n)).
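A direct transcription of this construction (my own code, 1-indexed as in the description; INF marks "no edge", although here every pair ends up connected):

def build_worst_case(n):
    INF = float("inf")
    e = [[INF] * (n + 1) for _ in range(n + 1)]   # 1-indexed adjacency matrix
    for i in range(1, n):
        e[i][i + 1] = 1                           # the chain 1 - 2 - ... - n
    for i in range(n - 2, 0, -1):
        for j in range(i + 2, n + 1):
            e[i][j] = e[i][i + 1] + e[i + 1][j] + 1
    return e

The point of the recipe is that the direct edge (i, j) is always exactly one longer than the detour through i + 1, so every time Dijkstra pops a vertex i it strictly improves the tentative distance of every remaining vertex j > i, giving Θ(n²) = Θ(E) decrease-key operations at Θ(log n) each.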

Minimum Weight Triangulation Dynamic Programming Algorithm

So, I'm trying to understand the dynamic programming algorithm for finding the minimum-weight triangulation of a convex polygon. For those of you that don't know, triangulation is where we take a convex polygon and break it up into triangles. The minimum-weight triangulation is the triangulation of the polygon where the sum of the lengths of all the edges (i.e. the perimeters of all the triangles) is the smallest.
It's actually a fairly common algorithm, however I just can't grasp it. Here is the algorithm I'm trying to understand:
http://en.wikipedia.org/wiki/Minimum-weight_triangulation#Variations
Here's another description I'm trying to follow(Scroll down to 5.2 Optimal Triangulations):
http://valis.cs.uiuc.edu/~sariel/teach/notes/algos/lec/05_dprog_II.pdf
So I understand this much so far. I take all my vertices, and make sure they are in clockwise order around the perimeter of the original polygon. I make a function that returns the minimum weight triangulation, which I call MWT(i, j) of a polygon starting at vertex i and going to vertex j. This function will be recursive, so the first call should be MWT(0, n-1), where n is the total number of vertices. MWT should test all the triangles that are made of the points i, j, and k, where k is any vertex between those. Here's my code so far:
def MWT(i, j):
    if j <= i: return 0
    elif j == i+1: return 0
    cheap_cost = float("infinity")
    for k in range(i, j):
        cheap_cost = min(cheap_cost,
                         cost((vertices[i], vertices[j], vertices[k])) + MWT(i, k) + MWT(k, j))
    return cheap_cost
However, when I run it, it overflows the stack. I'm just completely lost and would appreciate it if somebody could help direct me in the right direction.
If you guys need any more info just ask.
I think that you want to do
for k in range(i+1, j):
not
for k in range(i, j):
because you never want k to be the same as i or j: with k == i, the recursive call MWT(k, j) is just MWT(i, j) again, so you recurse on the exact values you are currently evaluating, which is why the stack overflows.
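Putting that fix together with memoization (without it, the recursion recomputes the same (i, j) subproblems exponentially often), a corrected sketch might look like the following; the example vertices list and the Euclidean cost function are stand-ins for the asker's own data:

import math
from functools import lru_cache

vertices = [(0, 0), (4, 0), (5, 2), (3, 4), (0, 3)]   # example convex polygon, in order

def cost(tri):
    # perimeter of the triangle given by three points
    a, b, c = tri
    return math.dist(a, b) + math.dist(b, c) + math.dist(c, a)

@lru_cache(maxsize=None)
def MWT(i, j):
    if j <= i + 1:              # fewer than three vertices: nothing to triangulate
        return 0
    cheap_cost = float("inf")
    for k in range(i + 1, j):   # k strictly between i and j
        cheap_cost = min(cheap_cost,
                         cost((vertices[i], vertices[j], vertices[k]))
                         + MWT(i, k) + MWT(k, j))
    return cheap_cost

print(MWT(0, len(vertices) - 1))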

Bipartite connected graph proof

A friend presented me with a conjecture that seems to be true but neither of us can come up with a proof. Here's the problem:
Given a connected, bipartite graph with disjoint non-empty vertex sets U and V, such that |U| < |V|, every vertex is in either U or V, and there are no edges connecting two vertices within the same set, then there exists at least one edge connecting vertices a∈U and b∈V such that degree(a) > degree(b).
It's trivial to prove that there is at least one vertex in U whose degree is higher than that of some vertex in V, but proving that a pair with an edge connecting them exists is stumping us.
For any edge e=(a,b) with a∈U and b∈V, let w(e) = 1/deg(b) − 1/deg(a). For any vertex x, the sum of 1/deg(x) over all edges incident with x equals 1, because there are deg(x) such edges. Hence the sum of w(e) over all edges e equals |V| − |U|. Since |V| − |U| > 0, we have w(e) > 0 for some edge e=(a,b), which means that deg(a) > deg(b).
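Written out, the double counting in that step is

∑_{e∈E} w(e) = ∑_{b∈V} deg(b)·(1/deg(b)) − ∑_{a∈U} deg(a)·(1/deg(a)) = |V| − |U|,

since the term 1/deg(b) appears once for each of the deg(b) edges incident with b, and likewise for each a.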
Prove it by contradiction, i.e. suppose that deg(a) ≤ deg(b) for all (a,b)∈E, where E is the edge set of the graph (with the convention that the first element of an edge is in U and the second in V).
For F⊆E, denote by V(F) the subset of V which is reachable through the edge set F, that is:
V(F) = { b | (a,b)∈F }
Now build an edgeset F as follows:
F = empty set
For a ∈ U:
add any edge (a,b)∈E to F
Keep adding arbitrary edges (a,b)∈E to F until |V(F)| = |U|
The set V(F) obtained is connected to all nodes in U, hence by our assumption we must have
∑_{a∈U} deg(a) ≤ ∑_{b∈V(F)} deg(b)
However, since |U| = |V(F)| and |U| < |V|, we know that there must be at least one "unreached" node v∈V∖V(F), and since the graph is connected, deg(v) > 0, so we obtain
∑_{a∈U} deg(a) < ∑_{b∈V} deg(b)
which is impossible: in a bipartite graph both sums count every edge exactly once, so they must be equal.

Proof that Dominating Set is NP-Complete

Here is the question. I am wondering if there is a clear and efficient proof:
Vertex Cover: input undirected G, integer k > 0. Is there a subset of
vertices S, |S|<=k, that covers all edges?
Dominating Set: input undirected G, integer k > 0. Is there a subset of
vertices S, |S|<= k, that dominates all vertices?
A vertex covers its incident edges and dominates its neighbors and itself.
Assuming that VC is NPC, prove that DS is NPC.
There is a quite nice and well-known reduction:
Given an instance (G,k) of Vertex Cover, build an instance (H,k) of Dominating Set, where for H you take G, remove all isolated vertices, and for every edge (u,v) add an additional vertex x connected to u and v.
First realize that a Vertex Cover of G is a Dominating Set of H: it is a DS of G (after removing isolated vertices), and the new vertices are also dominated. So if G has a VC of size at most k, then H has a DS of size at most k.
For the converse, consider D, a Dominating Set of H.
Notice that if one of the new vertices is in D, we can replace it with one of its two neighbors and still get a Dominating Set: its only neighbors are the two original vertices u and v, and those are also connected to each other, so everything x could possibly dominate is also dominated by u or v.
So we can assume that D contains only vertices from G. Now for every edge (u,v) in G the new vertex x is dominated by D, so either u or v is in D. But this means D is a Vertex Cover of G.
And there we have it: G has a Vertex Cover of size at most k if and only if H has a Dominating Set of size at most k.
Taken from: CMSC 651 Advanced Algorithms, Lecturer Samir Khuller
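For concreteness, here is a small Python sketch of the construction of H (my own representation: vertices in a set, undirected edges as frozensets of their endpoints):

def vc_to_ds_instance(vertices, edges):
    # build H from G: drop isolated vertices, then add one new vertex per
    # edge (u, v), adjacent to both u and v; the budget k stays the same
    non_isolated = {v for e in edges for v in e}
    h_vertices = set(non_isolated)
    h_edges = set(edges)
    for e in edges:
        u, v = tuple(e)
        x = ("x", u, v)                  # a fresh vertex for this edge
        h_vertices.add(x)
        h_edges.add(frozenset((u, x)))
        h_edges.add(frozenset((v, x)))
    return h_vertices, h_edges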
I think that the second problem is not NP.
Let's try the following algorithm:
1. Get the original graph.
2. Run any algorithm which checks if a graph is connected or not.
3. Mark all edges used in step 2.
4. If the graph is connected, then return the set of marked edges; otherwise there is no such set.
If I understood your problem correctly, then it is not NP-complete.

What is the relaxation condition in graph theory

I'm trying to understand the main concepts of graph theory and the algorithms within it. Most algorithms seem to contain a "relaxation condition", and I'm unsure what this is.
Could someone explain it to me, please?
An example of this is Dijkstra's algorithm; here is the pseudo-code:
 1  function Dijkstra(Graph, source):
 2      for each vertex v in Graph:            // Initializations
 3          dist[v] := infinity                // Unknown distance function from source to v
 4          previous[v] := undefined           // Previous node in optimal path from source
 5      dist[source] := 0                      // Distance from source to source
 6      Q := the set of all nodes in Graph     // All nodes in the graph are
                                               // unoptimized - thus are in Q
 7      while Q is not empty:                  // The main loop
 8          u := vertex in Q with smallest dist[]
 9          if dist[u] = infinity:
10              break                          // all remaining vertices are inaccessible from source
11          remove u from Q
12          for each neighbor v of u:          // where v has not yet been removed from Q
13              alt := dist[u] + dist_between(u, v)
14              if alt < dist[v]:              // Relax(u,v,a)
15                  dist[v] := alt
16                  previous[v] := u
17      return dist[]
Thanks
Relaxation step:
You have two nodes, u and v.
For every node, you maintain a tentative distance from the source node (for every node except the source it starts at positive infinity, and it only ever decreases, until it reaches its minimum).
The relaxation step basically is asking this:
I already know that I can reach v with some path of distance dist[v]. Could I improve on this by going to v through u instead? (where the distance of the latter would be dist[u] + weight(u, v))
Graphically:
s ~~~~~~~> v
 \         ^
  \        |
   \~~~~~> u
You know some path s~>v which has distance dist[v], and you know some path s~>u which has distance dist[u]. If dist[u] + weight(u, v) < dist[v], then the path s~>u->v is shorter than s~>v, so you'd better use that one!
(I write a~>b to mean a path of any length from a to b, while by a->b I mean a single direct edge from a to b.)
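In code, the relaxation on lines 13-16 is just this small test-and-update; here is a minimal Python sketch (dist and previous as dicts and weight as a function are my own choices):

def relax(u, v, dist, previous, weight):
    # can we reach v more cheaply by going through u?
    alt = dist[u] + weight(u, v)
    if alt < dist[v]:
        dist[v] = alt          # yes: record the shorter distance
        previous[v] = u        # the best known path now enters v from u
        return True            # the relaxation "succeeded" (dist[v] decreased)
    return False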
You may also want to check this lecture: http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-046JFall-2005/VideoLectures/detail/embed17.htm
One of the meanings of the English word "relaxation" is decreasing something. Because at lines 14, 15 and 16 you are essentially checking whether you can decrease (optimize) the currently computed distance, I guess that's why it is called the "relaxation condition".
One way to remember it is:
. relax the muscle to reduce tension or stress on it
So we say:
. relax the outgoing edges of the closest vertex in Dijkstra's algorithm
. relax edges repeatedly in the Bellman-Ford algorithm
Both imply trying to reduce the distance from the start node to the vertex at the other end of the edge.
