Dijkstra worst-case complexity sequence of inputs - graph

I am looking for a sequence of inputs for Dijkstra's algorithm, implemented with a regular binary heap, on which Dijkstra's actual running time would be Θ((e+v)log v).
I know how to implement Dijkstra and how it works, and I also understand that the most time-consuming operations are adding a vertex to the heap and changing the distance of a vertex. However, I am not sure how to find a graph (a sequence of graphs) that would be the worst-case input for Dijkstra.
Also, if you have any general tips on how to find worst-case input sequences, that would be helpful.

Let the vertices be numbered from 1 to n, and suppose you want to find a path from vertex 1 to vertex n. Let e[i][j] be the length of the edge connecting i and j. Initially e[1][2] = e[2][3] = ... = e[n - 1][n] = 1. Now iterate through the vertices from n - 2 down to 1: at the i-th vertex, for each j in [i + 2, n], set e[i][j] = e[i][i + 1] + e[i + 1][j] + 1.
Now we have a complete graph. On each iteration Dijkstra will update O(n) vertices, so that is O(n^2) = O(E) updates, each working in O(log n).
So the final asymptotics will be O(n log(n) + E log(n)).
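The construction above can be sketched in Python (0-based indices, with an update counter added by me to make the Θ(E log V) behaviour visible):

```python
import heapq

def worst_case_graph(n):
    # Chain edges of weight 1, then e[i][j] = e[i][i+1] + e[i+1][j] + 1
    # for j >= i + 2, filling in a complete graph.
    INF = float("inf")
    e = [[INF] * n for _ in range(n)]
    for i in range(n - 1):
        e[i][i + 1] = 1
    for i in range(n - 3, -1, -1):
        for j in range(i + 2, n):
            e[i][j] = e[i][i + 1] + e[i + 1][j] + 1
    return e

def dijkstra_count_updates(e, source=0):
    # Heap-based Dijkstra (lazy deletion) that counts distance updates;
    # each update costs one O(log n) heap push.
    n = len(e)
    dist = [float("inf")] * n
    dist[source] = 0
    done = [False] * n
    pq = [(0, source)]
    updates = 0
    while pq:
        d, u = heapq.heappop(pq)
        if done[u]:
            continue
        done[u] = True
        for v in range(n):
            if e[u][v] != float("inf") and d + e[u][v] < dist[v]:
                dist[v] = d + e[u][v]
                heapq.heappush(pq, (dist[v], v))
                updates += 1
    return dist, updates
```

On this family every popped vertex improves the tentative distance of every later vertex by exactly 1, so the number of heap pushes is n(n-1)/2 = Θ(n^2) = Θ(E).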

Related

How to check whether a graph of n vertices contains n/k disjoint k-complete graphs by linear programming?

Edges are given in the form Xij, which denotes whether there is an edge between the i-th and j-th vertex. I am solving an integer optimization problem and want to add this constraint to it.
Recently I found a solution and wanted to share it.
I think these two conditions are necessary and sufficient:
∀i: Xi1 + Xi2 + ... + XiN = K - 1
∀i, ∀j, ∀k: Xij + Xjk + Xik ≠ 2
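A quick brute-force check of the two conditions (my own illustration, not part of the LP formulation) can sanity-test them on small graphs:

```python
from itertools import combinations

def satisfies_conditions(X, k):
    # X is a symmetric 0/1 adjacency matrix with X[i][i] = 0.
    n = len(X)
    # Condition 1: every vertex has exactly k - 1 neighbours.
    if any(sum(X[i]) != k - 1 for i in range(n)):
        return False
    # Condition 2: no triple of vertices has exactly 2 of its 3 possible
    # edges, i.e. every path of length 2 closes into a triangle.
    for i, j, l in combinations(range(n), 3):
        if X[i][j] + X[j][l] + X[i][l] == 2:
            return False
    return True
```

Together the two conditions force every connected component to be a clique on exactly k vertices, which is what the constraint is meant to express.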

How does |E| differ from E and |V| differ from V?

I am looking at code to find the complexity of depth-first search (DFS). How does O(|V| + |E|) differ from O(V + E)? What is the meaning of |·| around V and E? Does it mean the "sum of all vertices" is represented as |V|?
V and E are sets, so strictly speaking O(E) would be undefined. |·| gives you the number of elements in the set; it's called cardinality in maths. For a graph G = (V, E), |E| is the number of edges and |V| is the number of vertices.
When people write O(V + E) they actually mean O(|V| + |E|), and most readers will understand it that way, because it is the only plausible interpretation. Still, it is unclean and I would not use it myself.
|V| typically means the cardinality (number of elements) in V.
So O(|V| + |E|) means "on the order of the number of elements in V plus the number of elements in E."
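In code, |V| and |E| are simply element counts; in Python, for instance:

```python
# A tiny graph G = (V, E) as sets; len(.) computes the cardinality.
V = {1, 2, 3, 4}
E = {(1, 2), (2, 3), (3, 4)}
print(len(V), len(E))  # |V| = 4, |E| = 3
```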

Why is time complexity for BFS/DFS not simply O(E) instead of O(E+V)?

I know there's a similar question on Stack Overflow, where someone asked why the time complexity of BFS/DFS is not simply O(V).
The answer given was that E can be as large as V^2 in the case of a complete graph, and hence it is valid to include E in the time complexity.
But in a connected graph, V cannot be greater than E + 1. So, in that case, shouldn't leaving V out of the time complexity work?
If it is given that E = kV + c for some real constants k and c, then
O(E + V) = O(kV + c + V) = O(V) = O(E), and your argument is correct.
An example of this is trees.
In general (i.e., without any prior information), however, E = O(V^2), and thus we cannot do better than O(V^2).
Why not write just O(E) always?
EDIT: The primary reason for always writing O(E + V) is to avoid ambiguity.
For example, there might be cases where there are no edges in the graph at all (i.e., O(E) ~ O(1)). Even in such cases, we still have to visit each vertex (O(V)); we cannot finish in O(1) time.
Thus, only writing O(E) won't do in general.
V has to be included because both BFS and DFS rely on arrays of size |V| to track which vertices have been processed/discovered/explored (whatever the case may be). If a graph has 0 edges and 100000 vertices, such arrays will still take more time to initialize than they would if there were only 5 vertices. Thus, the running times of BFS and DFS scale with |V|.
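Both terms show up directly in a standard BFS; a minimal Python sketch (my own illustration):

```python
from collections import deque

def bfs_distances(adj, source):
    # The O(|V|) part: per-vertex state is initialized even if the
    # graph has no edges at all.
    n = len(adj)
    dist = [-1] * n
    dist[source] = 0
    queue = deque([source])
    # The O(|E|) part: each adjacency-list entry is scanned at most once.
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```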

longest path in a directed weighted graph from a given vertex to another one

The graph has positive weights and may or may not be acyclic.
The input file consists of:
vertex number, edge number, beginning vertex, ending vertex
edge1(from, to, weight)
edge2(from, to, weight)
and so on.
The length of the path should be infinite if there is a cycle in the graph, and 0 if there is no way.
What I do is remove duplicate edges, keeping only the longest, and use Bellman-Ford or Dijkstra's algorithm with an adjacency list or matrix; both work fine.
However, the program should find the path in at most 2 seconds, and some input files contain 10000 vertices and 100000 edges.
What should I do?
The time limit is 2 seconds, which roughly means the program can afford on the order of ~10^6 heavy iterations. Given the limits V = 10000 and E = 100000, an algorithm running in O(V), O(E), O(V + E), or even O(E + V log V) will easily compute your requirement well within the given time.
Note that E + V log V ~ (100000 + ~132877), which is less than 10^6.
// This 10^6 limit is quite comfortable for processors with a 10^9 Hz frequency. So even if your algorithm executes 1000 instructions per iteration, you will be in the safe zone.
So, here is the proposed algorithm:
You will build the graph in O(E). Use an adjacency-list data structure to represent the graph.
-> While building this data structure, store the indegree of each vertex and also store a count of the vertices.
countV := V
foreach edge (from, to, weight):
    inDegree[to] := inDegree[to] + 1
Check if there is a cycle in the graph in O(V + E):
repeat:
    if there is no vertex with inDegree = 0 and countV = 0:
        the graph has no cycle; stop
    else if there is no vertex with inDegree = 0 and countV != 0:
        the graph has a cycle; stop
    else:
        select any vertex u having inDegree = 0 and remove it:
        countV := countV - 1
        decrease the inDegree of all of u's directed neighbours by 1
So if you find a cycle, your answer is directly infinite.
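The indegree-based cycle check above is essentially Kahn's topological-sort algorithm; a minimal Python sketch (function and variable names are my own):

```python
from collections import deque

def has_cycle(n, edges):
    # Kahn-style check: repeatedly remove vertices of indegree 0.
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, _w in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    removed = 0
    while queue:
        u = queue.popleft()
        removed += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Vertices never removed lie on (or are only reachable through) a cycle.
    return removed < n
```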
Make a BFS or DFS to determine whether the ending vertex is reachable from the beginning vertex. This can be done in O(V + E), or even O(E); let us take O(V + E). If it is not reachable, your answer is directly 0.
Now apply Dijkstra, but in the relaxation condition just check the opposite, i.e., in the pseudocode given here, instead of doing
if alt < dist[v]:
    dist[v] := alt
do
if alt > dist[v]:
    dist[v] := alt
This can be done in O(E + V log V). Hence the overall complexity of the solution will be O(E + V log V), which is well within the constraints.
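One caveat: a heap-based Dijkstra loses its greedy guarantee when maximizing, but since the cycle check above already guarantees a DAG at this point, the flipped relaxation is safe if done in topological order, in O(V + E). A sketch under that assumption (names are my own):

```python
from collections import deque

def longest_path_dag(n, edges, s, t):
    # Longest s->t path in a DAG: relax edges in topological order.
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Kahn's algorithm for a topological order (graph is assumed acyclic).
    order = []
    queue = deque(v for v in range(n) if indeg[v] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    NEG = float("-inf")
    dist = [NEG] * n
    dist[s] = 0
    for u in order:
        if dist[u] == NEG:
            continue  # u is unreachable from s
        for v, w in adj[u]:
            if dist[u] + w > dist[v]:  # the flipped relaxation
                dist[v] = dist[u] + w
    return 0 if dist[t] == NEG else dist[t]  # 0 when t is unreachable
```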

What is the relaxation condition in graph theory

I'm trying to understand the main concepts of graph theory and the algorithms within it. Most algorithms seem to contain a "relaxation condition", and I'm unsure what this is.
Could someone explain it to me, please?
An example of this is Dijkstra's algorithm; here is the pseudo-code.
1  function Dijkstra(Graph, source):
2      for each vertex v in Graph:        // Initializations
3          dist[v] := infinity            // Unknown distance function from source to v
4          previous[v] := undefined       // Previous node in optimal path from source
5      dist[source] := 0                  // Distance from source to source
6      Q := the set of all nodes in Graph
       // All nodes in the graph are unoptimized - thus are in Q
7      while Q is not empty:              // The main loop
8          u := vertex in Q with smallest dist[]
9          if dist[u] = infinity:
10             break                      // all remaining vertices are inaccessible from source
11         remove u from Q
12         for each neighbor v of u:      // where v has not yet been removed from Q
13             alt := dist[u] + dist_between(u, v)
14             if alt < dist[v]:          // Relax (u,v,a)
15                 dist[v] := alt
16                 previous[v] := u
17     return dist[]
Thanks
Relaxation step:
You have two nodes, u and v
For every node, you have a tentative distance from the source node (for all nodes except for the source, it starts at positive infinity and it only decreases up to reaching its minimum).
The relaxation step basically is asking this:
I already know that I can reach v with some path of distance dist[v]. Could I improve on this by going to v through u instead? (where the distance of the latter would be dist[u] + weight(u, v))
Graphically:
s ~~~~~~~> v
 \         ^
  \        |
   \~~~~~> u
You know some path s~>v which has distance dist[v], and you know some path s~>u which has distance dist[u]. If dist[u] + weight(u, v) < dist[v], then the path s~>u->v is shorter than s~>v, so you'd better use that one!
(I write a~>b to mean a path of any length from a to b, while by a->b I mean a direct single edge from a to b.)
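The relaxation step itself is tiny; a minimal Python sketch (names chosen to match the pseudocode above):

```python
def relax(u, v, weight, dist, previous):
    # Try to improve the tentative distance to v by routing through u.
    alt = dist[u] + weight
    if alt < dist[v]:
        dist[v] = alt        # the estimate is "relaxed" downwards
        previous[v] = u
        return True          # the tentative distance was improved
    return False
```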
You may also want to check this lecture: http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-046JFall-2005/VideoLectures/detail/embed17.htm
One of the meanings of the English word "relaxation" is decreasing something. Because at lines 14, 15 and 16 you are essentially checking whether you can decrease (optimize) the currently computed distance, I guess that's why it is called the "relaxation condition".
One way to remember it is:
. relax the muscle to reduce tension or stress on it
So we say:
. relax the outgoing edges of the closest vertex in Dijkstra's algorithm
. relax the edges repeatedly in the Bellman-Ford algorithm
Both imply trying to reduce the distance from the start node to the vertex on the other end of the edge.
