In my research I have come across the following variant of the vertex cover problem:
Given a graph G, a vertex v, and a number k, decide whether G has a vertex cover of size k
that contains v.
I have searched all over the literature and could not find a similar problem. I am interested in the complexity of this problem (I have proved that it is complete for $P^{NP[\log]}$).
The question is: have you ever seen this variant of the vertex cover problem? What is this problem called?
Given a graph G and an integer k, deciding whether G has a vertex cover of size k is the decision version of the minimum vertex cover problem, and it is NP-complete.
In fact, the problem you describe is no different from that one. If the cover must contain the vertex v, you can remove v and all edges having v as an endpoint. What you then have to decide is whether the remaining subgraph can be covered with k-1 vertices.
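The reduction can be sketched in a few lines of Python. This is only an illustration on a brute-force solver (the function names are mine, and the search is exponential, so it is meant for tiny graphs only):

```python
from itertools import combinations

def has_vertex_cover(vertices, edges, k):
    """Brute force: does the graph have a vertex cover of size at most k?"""
    for size in range(k + 1):
        for subset in combinations(vertices, size):
            s = set(subset)
            if all(a in s or b in s for (a, b) in edges):
                return True
    return False

def has_vertex_cover_containing(vertices, edges, k, v):
    """G has a vertex cover of size k containing v  iff
    G - v has a vertex cover of size k - 1."""
    remaining_vertices = [u for u in vertices if u != v]
    remaining_edges = [(a, b) for (a, b) in edges if v not in (a, b)]
    return has_vertex_cover(remaining_vertices, remaining_edges, k - 1)
```

For example, on a star with center c and leaves a, b, d, there is a cover of size 1 containing c, but none of size 1 containing a leaf.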
My graph is as follows:
I need to find a maximum weight subgraph.
The problem is as follows:
There are n vertex clusters, and every cluster contains some vertices. Between any two vertices in different clusters there is a weighted edge; between vertices in the same cluster there is no edge. I want to find a maximum-weight subgraph by selecting exactly one vertex from each cluster. The total weight is computed by adding the weights of all edges between the selected vertices. I have added a picture to explain the problem. I know how to model this problem as an ILP, but I do not know how to solve it with an approximation algorithm or how to obtain its approximation ratio.
Could you give some solutions and suggestions?
Thank you very much. If any point of this description is unclear,
please feel free to ask.
I do not think you can find an alpha-approximation for this problem, for any alpha. If such an approximation existed, it would prove that the Unique Games Conjecture (UGC) is false, and disproving (or proving) the UGC would be a rather big feat :-)
(And I'm actually among the UGC believers, so I'd say it's impossible :p)
The reduction is quite straightforward, since any UGC instance can be described as an instance of your problem with weights 0 or 1 on the edges.
What I can see as a polynomial-time approximation is a 1/k-approximation (with k the number of clusters), using a maximum-weight perfect matching (PM) algorithm. (We assume the number of clusters is even; if it is odd, just add a 'useless' cluster with one vertex and weight 0 everywhere.)
First, you need to build a new graph with one vertex per cluster. The edge between u and v gets weight max w(e) over all edges e going from cluster u to cluster v. Run a maximum-weight PM algorithm on this graph.
You can then select one vertex per cluster: the endpoints of the edges that realize the weights selected in the PM.
The total weight of the solution extracted from the PM is at least as large as the weight of the PM (since it contains the edges of the PM plus other edges).
You can then conclude that this is a 1/k-approximation, because if there existed a solution more than k times heavier than the PM weight, the PM would not have been maximal.
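The scheme can be sketched in Python. All names here are mine, and the brute-force enumeration of perfect matchings stands in for a real maximum-weight PM algorithm, so this is only workable for a handful of clusters:

```python
from itertools import combinations  # noqa: imported for symmetry with other sketches

# clusters: list of lists of vertex names
# w: dict mapping frozenset({u, v}) -> edge weight (missing pairs count as 0)
def matching_heuristic(clusters, w):
    k = len(clusters)  # assumed even, as in the text
    # Step 1: cluster-level graph; edge (i, j) weighted by the heaviest
    # vertex-to-vertex edge between cluster i and cluster j.
    best = {}
    for i in range(k):
        for j in range(i + 1, k):
            pairs = [(w.get(frozenset({u, v}), 0), u, v)
                     for u in clusters[i] for v in clusters[j]]
            best[(i, j)] = max(pairs)
    # Step 2: maximum-weight perfect matching on the cluster graph
    # (brute force over all pairings -- fine for a few clusters only).
    def pairings(items):
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for idx in range(len(rest)):
            for tail in pairings(rest[:idx] + rest[idx + 1:]):
                yield [(first, rest[idx])] + tail
    best_pm = max(pairings(list(range(k))),
                  key=lambda pm: sum(best[p][0] for p in pm))
    # Step 3: in each matched cluster pair, select the two endpoints of
    # the heaviest inter-cluster edge.
    chosen = []
    for p in best_pm:
        _, u, v = best[p]
        chosen.extend([u, v])
    return chosen
```

For two clusters {a1, a2} and {b1, b2} with w(a2, b2) = 7 being the heaviest cross edge, the heuristic selects a2 and b2.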
The explanation is quite short (terse, I'd say); tell me if there is any part you don't follow or disagree with.
Edit: equivalence with the UGC (unique label cover) explained.
Take a UGC instance. Every node of the UGC instance is represented by a cluster, with as many vertices in the cluster as there are colors in the UGC instance. Create an edge of weight 0 between two vertices if they do not correspond to an edge of the UGC instance, or if they correspond to a 'bad' color match; if they correspond to a good color match, give the edge weight 1.
Then an optimal solution to the resulting instance of your problem corresponds to an optimal solution of the corresponding UGC instance.
So, if the UGC holds, it is NP-hard to approximate your problem.
Introduce a new graph G' = (V', E') as follows and then solve (or approximate) the maximum stable set problem on G'.
For each edge a-b in E(G), introduce a vertex v_ab in V'(G') whose weight is equal to the weight of the edge a-b.
Connect all vertices of V'(G') to each other, except for the following pairs:
The vertex v_ab is not connected to the vertex v_ac when the vertices b and c are in different clusters of G. In this manner, we can select both of these vertices in a stable set of G' (hence we can select both of the corresponding edges in G).
The vertex v_ab is not connected to the vertex v_cd when the vertices a, b, c and d are in four different clusters of G. In this manner, we can select both of these vertices in a stable set of G' (hence we can select both of the corresponding edges in G).
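A sketch of this construction in Python, assuming clusters are given by a vertex-to-cluster map (all function names are mine). The two exceptions above boil down to one rule: two edges of G are compatible exactly when no cluster contributes two different endpoints.

```python
from itertools import combinations

def compatible(e1, e2, cluster_of):
    """Two edges of G can be selected together iff no cluster
    contributes two different endpoints."""
    used = {}
    for x in e1 + e2:
        c = cluster_of[x]
        if used.setdefault(c, x) != x:
            return False
    return True

def build_conflict_graph(edges, cluster_of):
    """G': one node per edge of G (carrying the edge's weight); join two
    nodes whenever the edges are NOT compatible.  A maximum-weight
    stable set in G' then corresponds to a best selection in G."""
    nodes = list(edges)
    gprime_edges = [(e1, e2) for e1, e2 in combinations(nodes, 2)
                    if not compatible(e1, e2, cluster_of)]
    return nodes, gprime_edges
```

For instance, edges a-b and a-c (with b, c in different clusters) are left unconnected in G', while a-b and a2-b conflict when a and a2 sit in the same cluster.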
Finally, I think you can find an alpha-approximation for this problem. In other words, in my opinion the Unique Games Conjecture is wrong due to the 1.999999-approximation algorithm which I proposed for the vertex cover problem.
Is vertex coloring of a hypergraph with no uniformity restriction NP-hard? I have seen papers showing that vertex coloring of a k-uniform hypergraph is NP-hard. However, I could not find any source that explicitly says whether vertex coloring of a general (not necessarily k-uniform) hypergraph is NP-hard.
Before answering this question, a few notions have to be explained, such as coloring and uniformity in hypergraphs. I will use slightly different notation here.
A k-coloring of a hypergraph H = (V, E) is a function assigning colors from {1, 2, ..., k} to the vertices of H in such a way that no edge is monochromatic (no edge has all of its vertices the same color, apart from singletons).
The chromatic number of a hypergraph H is the smallest integer k for which H admits a k-coloring.
A hypergraph H = (V, E) is called r-uniform if all edges have cardinality (size) exactly r, where the cardinality of a hyperedge e is the number of vertices in e.
You have already found that k-coloring an r-uniform hypergraph, r >= 3, is NP-hard. Given that this is true (and it is), the problem is also NP-hard for general hypergraphs, because the r-uniform case is a special case of the general one.
To see why, look at Berge's definition of an r-uniform hypergraph [1], which is equivalent to the definition above. Denote r(H) = max |E_i| and s(H) = min |E_i|; then H is r-uniform if r(H) = s(H). Now suppose we could color general hypergraphs in polynomial time, i.e., find the smallest integer k for which H admits a k-coloring even when s(H) is smaller than r(H). Then in particular we could do so when s(H) = r(H), contradicting the hardness of the uniform case.
Finding the exact value of the chromatic number of a hypergraph is NP-hard.
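To make the definitions concrete, here is a small checker for proper colorings together with a brute-force chromatic number computation (exponential, in line with the hardness statement; the function names are mine):

```python
from itertools import product

def is_proper_coloring(edges, coloring):
    """A coloring is proper iff no hyperedge with at least two
    vertices is monochromatic (singleton edges are ignored)."""
    for edge in edges:
        if len(edge) >= 2 and len({coloring[v] for v in edge}) == 1:
            return False
    return True

def chromatic_number(vertices, edges):
    """Smallest k admitting a proper k-coloring, found by trying
    every assignment -- exponential in the number of vertices."""
    for k in range(1, len(vertices) + 1):
        for assignment in product(range(k), repeat=len(vertices)):
            coloring = dict(zip(vertices, assignment))
            if is_proper_coloring(edges, coloring):
                return k
    return len(vertices)
```

For the single hyperedge {1, 2, 3}, two colors suffice (only all-equal is forbidden), whereas the 2-uniform triangle needs three.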
I want to write an algorithm that finds an optimal vertex cover of a tree in linear time O(n), where n is the number of vertices of the tree.
A vertex cover of a graph G=(V,E) is a subset W of V such that for every edge (a,b) in E, a is in W or b is in W.
In a vertex cover we need to have at least one vertex for each edge.
If we pick a non-leaf, it can cover more than one edge.
That's why I thought we can do it as follows:
We visit the root of the tree, then one of its children, then a child of that child, and so on.
When we reach a leaf, we check whether its father has already been taken into the optimal vertex cover, and if not, we pick the father. Then, if the vertex we just picked also has other children, we visit them recursively in the same way: whenever we reach a leaf whose father has not been chosen for the desired vertex cover, we choose the father, and so on.
I have written the following algorithm:
DFS(node x){
    discovered[x] = 1;
    for each (v in Adj(x)){
        if (discovered[v] == 0){
            DFS(v);
            if (v->taken == 0){
                x->taken = 1;
            }
        }
    }
}
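For reference, a runnable Python version of this pseudocode might look like the following (assuming vertices are numbered 0..n-1 and the tree is given by adjacency lists):

```python
import sys

def tree_vertex_cover(n, adj, root=0):
    """DFS vertex cover of a tree: after exploring a child v,
    take the parent x whenever v was not taken -- this covers the
    edge (x, v).  Runs in O(n), since each edge is examined twice."""
    taken = [False] * n
    discovered = [False] * n
    sys.setrecursionlimit(max(1000, 2 * n + 10))

    def dfs(x):
        discovered[x] = True
        for v in adj[x]:
            if not discovered[v]:
                dfs(v)
                if not taken[v]:
                    taken[x] = True

    dfs(root)
    return [v for v in range(n) if taken[v]]
```

On the path 0-1-2-3 this picks {0, 2}, and on a star it picks the center only.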
I thought that its time complexity is
(|V_i| and |E_i| are the numbers of vertices and edges, respectively, of the subtree at whose root we call DFS)
Is the time complexity I found right? Or have I calculated it wrong?
EDIT: Is the complexity of the algorithm described by the recurrence relation T(|V|) = E*T(|V|-1) + O(1)? Or am I wrong?
I have this question from Sedgewick's course on algorithms: "Critical edge. Given an edge-weighted digraph, design an E*log(V) algorithm to find an edge whose removal causes the maximal increase (possibly infinite) in the length of the shortest path from s to t. Assume all of the edge weights are strictly positive. (Hint: compute the shortest path distances d(v) from s to v and consider the reduced costs c′(v,w) = c(v,w) + d(v) − d(w) ≥ 0.)"
I've read on the internet that in 1989 three researchers came up with an algorithm of complexity O(E + V*log(V)) that required advanced data structures, and I think it was for an undirected graph (not a digraph). If it took three advanced computer scientists to develop this algorithm, isn't it too much of a problem for an introductory course? But maybe it is much easier for just O(E*log(V)).
Can you help me solve it? I don't understand the hint given in the question.
Here is a sketch of an algorithm to find the critical edge on a shortest path, based on Sedgewick's hint.
First, the reduced cost c′(v,w)=c(v,w)+d(v)−d(w) corresponds to the increase in the length of the shortest path from s to w, when going through v just before w. (If v is in the shortest path from s to w then this increase is 0.) (Indeed d(v) is the length of the shortest path from s to v and c(v, w) the cost to go from v to w.)
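The distances d(v) and the reduced costs come out of a single run of Dijkstra's algorithm. A minimal sketch in Python, assuming every vertex is reachable from s (names are mine):

```python
import heapq
import math

def dijkstra(graph, s):
    """graph: dict mapping u -> list of (v, cost) out-edges.
    Returns shortest distances d(v) from s."""
    d = {u: math.inf for u in graph}
    d[s] = 0
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue  # stale queue entry
        for v, c in graph[u]:
            if du + c < d[v]:
                d[v] = du + c
                heapq.heappush(pq, (d[v], v))
    return d

def reduced_costs(graph, s):
    """c'(v, w) = c(v, w) + d(v) - d(w) >= 0; it is 0 exactly on
    edges lying on some shortest path from s."""
    d = dijkstra(graph, s)
    return {(v, w): c + d[v] - d[w]
            for v in graph for w, c in graph[v]}
```

For the digraph with edges 0→1 (cost 1), 1→2 (cost 1), 0→2 (cost 4), the shortest-path edges get reduced cost 0 and the detour edge 0→2 gets 2, its extra length over the shortest path.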
Suppose the shortest path from s to t is (s, ..., v, t) and that we remove the last edge (v, t). Then the increase in the length of the shortest path from s to t equals the minimum of c'(u, t) over all in-edges (u, t) with u != v.
Suppose u is such that c'(u, t) is minimum (still with u != v). Then follow the shortest path from s to u backward until you reach a vertex, say w, belonging to the shortest path from s to t (without any removed edge). The shortest path from s to t then looks like (s, ..., w, ..., v, t).
Observe that removing any edge between w and t causes an increase of at most c'(u, t) in the shortest path: if one of the edges between w and t is missing, it suffices to go from w to t through the vertex u. On the other hand, removing the last edge (v, t) causes exactly this increase.
Now iterate with w what was done with t: find a vertex x such that c'(x, w) is minimum and x is not on the shortest path, then follow the shortest path from s to x backward until you reach a vertex belonging to the shortest path from s to w.
Once you reach s, you are able to determine which edge to remove to cause the maximum increase in the length of the shortest path.
This is a confusing question, I agree. Here are some of my thoughts about it.
The "reduced cost" term and definition is used when reducing the A* search algorithm to Dijkstra's algorithm by replacing the original cost with the reduced cost:
c′(v,w) = c(v,w) - h(v) + h(w) = c(v,w) - (h(v) - h(w)) ≥ 0
The h(v) - h(w) part is the drop of the heuristic function, which cannot exceed the edge cost when the heuristic is consistent (monotonic); thus the reduced cost is still nonnegative (see slides 14 and 15 here).
It looks like Sedgewick suggests using the original distance function d(v) as a consistent heuristic when searching for the new/replacement shortest path in G', which is the same as the original G but with one edge removed along the original shortest path from s to t. Personally, I don't see how this helps solve the most vital edge problem in O(ElogV), though.
There is also a similar problem: find all downward and upward critical edges in a graph. By definition, decreasing the cost of a downward critical edge decreases the overall SP cost, and increasing the cost of an upward critical edge increases the overall SP cost. All critical edges can be found in O(ElogV), see ch. 8 here. But this does not answer the question of which edge is the most critical (causes the maximum SP increase when removed).
As you noted, the most vital edge problem was solved by Malik, Mittal and Gupta (1989) in O(E + V*log(V)) time. I have not found the original MMG paper, but this presentation explains it quite well. As far as I can see, it can be solved with a priority queue; no special data structures are required.
Sorry for not actually answering the original question (a solution for the most vital edge in a digraph using reduced costs), but I still hope the links and thoughts above are useful to someone. I would be happy to see the solution Sedgewick had in mind.
To find a minimum dominating set of an undirected graph G you can use a greedy algorithm like this:
Start with an empty set D. Until D is a dominating set, add a vertex v with the maximum number of uncovered neighbours.
The algorithm generally does not find the optimal solution; it is a ln(Delta)-approximation (where Delta is the maximum degree of a vertex in G).
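A minimal sketch of this greedy in Python, assuming the graph is given as an adjacency map (the function name is mine):

```python
def greedy_dominating_set(adj):
    """adj: dict mapping vertex -> set of neighbours.  Repeatedly pick
    the vertex whose closed neighbourhood covers the most still-uncovered
    vertices; stop once every vertex is dominated."""
    uncovered = set(adj)
    dominating = []
    while uncovered:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & uncovered))
        dominating.append(v)
        uncovered -= {v} | adj[v]
    return dominating
```

On a star it picks the center; on the path 0-1-2 it picks the middle vertex.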
Now I am looking for a simple example where the greedy algorithm does not find an optimal solution. The only one I found is a related instance of the set cover problem (http://en.wikipedia.org/wiki/Set_cover_problem#Greedy_algorithm, picture on the right).
Translating that one into a graph would require at least 14 vertices and a lot of edges.
Does anyone know a smaller example?
Thanks in advance
Consider the following graph:
A greedy approach will choose B then D and G. Meanwhile, E and F form a dominating set.
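Since the figure is not reproduced here, below is a 9-vertex graph of the same flavor with made-up vertex names, on which the gap can be checked in code: greedy grabs the high-degree vertex A first and ends with three vertices, while {L, R} alone dominates everything.

```python
# Hypothetical counterexample: A has the largest closed neighbourhood,
# but the optimum {L, R} avoids A entirely.
adj = {
    'A': {'L', 'p1', 'p2', 'p3', 'p4'},
    'L': {'A', 'p1', 'p2', 'l1'},
    'R': {'p3', 'p4', 'r1'},
    'p1': {'A', 'L'}, 'p2': {'A', 'L'},
    'p3': {'A', 'R'}, 'p4': {'A', 'R'},
    'l1': {'L'}, 'r1': {'R'},
}

def greedy(adj):
    """Greedy: repeatedly take the vertex covering the most uncovered."""
    uncovered, picked = set(adj), []
    while uncovered:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & uncovered))
        picked.append(v)
        uncovered -= {v} | adj[v]
    return picked

def dominates(adj, subset):
    """Is every vertex in the subset or adjacent to it?"""
    return all(v in subset or adj[v] & subset for v in adj)

print(greedy(adj))                 # three vertices, starting with 'A'
print(dominates(adj, {'L', 'R'}))  # True: the optimum has size 2
```

Greedy picks A (6 covered), then needs two more rounds for the stragglers l1, r1 and R, while L and R together dominate all nine vertices.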