Every NP-complete problem reduces to the Halting problem. Is this true?

I guess that, since the Halting problem is NP-hard, every NP-complete problem reduces to it, so the given statement is true. But I don't know how to prove it.
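For intuition, here is a sketch in Python of the standard many-one reduction from any decidable language L (in particular any NP-complete one) to the Halting problem. The `decider` function is a hypothetical stand-in for a decision procedure for L; the reduction maps an instance x to a program that halts iff x is in L.

```python
# Sketch: many-one reduction from a decidable language L to HALT.
# `decider` is a hypothetical total function deciding membership in L
# (every NP-complete language has such a decider, e.g. by brute force).

def reduce_to_halting(decider, x):
    """Map instance x to a zero-argument program that halts iff x is in L."""
    def program():
        if decider(x):
            return        # x in L: the program halts
        while True:       # x not in L: the program runs forever
            pass
    return program

# Toy usage with an obviously decidable "language" (the even numbers):
program = reduce_to_halting(lambda x: x % 2 == 0, 4)
program()  # returns, i.e. halts, certifying that 4 is in L
```

Deciding whether the produced program halts is exactly deciding whether x is in L, so L many-one reduces to the Halting problem.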

Is it possible to use the greedy solution to solve the problem of scheduling to minimize TOTAL lateness? How to solve it?

Please help! Is it possible to use the greedy solution to solve the problem of scheduling to minimize total lateness? How to solve it?
I understood the problem of scheduling to minimize maximum lateness and wanted to know how to solve the problem of scheduling to minimize total lateness. I searched through the internet and have not found a single solution.
I'm not sure what you mean by "the greedy solution", and I may have misunderstood your question, however have you considered tardiness as a performance measure?
While lateness is typically linear, L_j = C_j - d_j, and will be negative for jobs completed before the due date, tardiness can be thought of as

T_j = max(0, C_j - d_j),

where C_j is the completion time of job j and d_j is its due date. This is slightly nicer to work with, and some googling will reveal heuristics such as ATC (Apparent Tardiness Cost) to help you solve this, depending on the rest of your problem parameters.

It would not be too hard to write an IP formulation to minimise the sum of tardinesses, sum_j T_j, where each T_j is defined by the constraints T_j >= C_j - d_j and T_j >= 0, given that you are not "set" on using a heuristic.
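To make the objective concrete, here is a small sketch (with hypothetical instance data) that computes total tardiness for a given job order and finds the optimal order by brute force. This is only viable for tiny instances: minimising total tardiness on a single machine is NP-hard in general.

```python
from itertools import permutations

# Hypothetical single-machine instance: processing times p_j and due dates d_j.
p = [3, 2, 4]
d = [4, 3, 6]

def total_tardiness(order):
    """Sum of T_j = max(0, C_j - d_j) when jobs run in the given order."""
    t, total = 0, 0
    for j in order:
        t += p[j]                  # completion time C_j
        total += max(0, t - d[j])  # tardiness T_j
    return total

best = min(permutations(range(len(p))), key=total_tardiness)
print(best, total_tardiness(best))
```

For this instance the best order schedules job 1 first, illustrating that no single-pass greedy rule (such as EDD, which is optimal for maximum lateness) is guaranteed to minimise the total.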

How does recursion work in the Alloy Analyzer?

I see there is an option in the Alloy Analyzer to allow recursion up to a certain depth (1-3).
But what happens when a counterexample cannot be found because of the limited recursion depth?
Will there be an error or a warning, or are such counterexamples silently ignored?
Alloy basically does not support recursion. When it encounters recursion, it unrolls the code up to the maximum depth. Therefore, if no solution exists within that depth, it simply cannot find one. It could only raise an error if it knew that a potential solution exists beyond the bound, but knowing that would amount to solving the original problem.
This is, imho, one of the weakest spots in Alloy. Recursion is extremely important in almost all specifications.
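Alloy itself is not shown here, but the effect of bounded unrolling can be sketched in Python (a hypothetical analogy, not Alloy's actual implementation): the recursion is expanded a fixed number of times, and anything that needs more depth is silently missed rather than reported.

```python
# Analogy for Alloy's bounded unrolling of a recursive definition:
# beyond max_depth the recursion is simply cut off, so answers that
# would need more depth are silently missed, not flagged as errors.

def depth(x, parent, max_depth=3):
    """Unrolled 'distance to root'; returns None once the bound is exhausted."""
    for d in range(max_depth + 1):
        if x is None:
            return d          # reached the root within the bound
        x = parent.get(x)     # one unrolling step
    return None               # bound exhausted: silently no answer

parent = {"a": "b", "b": "c", "c": "d", "d": None}
print(depth("c", parent))  # answer found within the bound
print(depth("a", parent))  # needs depth 4 > 3: silently None
```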

Why use integer linear programming (ILP) when it is NP-complete?

The question may be stupid but it really confuses me for a long time.
I read a lot of papers on wireless sensor networks. Many researchers model their problems as ILPs. However, ILP is NP-complete, so solving it is not efficient in general.
So why do people write their problems in the form of an ILP? Do they do it to make their problems clear and easy to understand? Or am I making a mistake about the relation between ILP and NP-completeness?
I would really appreciate your help with this question.
Although the question might be considered off-topic, there are basically a few points to address.
You are right that general integer linear programming is NP-hard.
If a specific problem needs to be solved and general integer linear programming is the most specific way to formulate it, then nothing can be done about it; some problems are just hard to solve.
In some cases, it is possible to use the LP relaxation instead, either as a heuristic or with a provable approximation ratio.
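As a toy illustration of that relaxation idea, here is a sketch (with a hypothetical 0/1 knapsack instance) comparing the exact ILP optimum, found by brute force, against the bound given by the LP relaxation. The fractional relaxation of knapsack happens to be solved exactly by a greedy pass over value/weight ratios.

```python
from itertools import product
from fractions import Fraction

# Hypothetical 0/1 knapsack instance: maximise value subject to a weight cap.
values   = [10, 6, 5]
weights  = [4, 3, 3]
capacity = 6

def ilp_optimum():
    """Exact 0/1 optimum by brute force (exponential in the number of items)."""
    best = 0
    for choice in product([0, 1], repeat=len(values)):
        if sum(w * x for w, x in zip(weights, choice)) <= capacity:
            best = max(best, sum(v * x for v, x in zip(values, choice)))
    return best

def lp_relaxation_bound():
    """LP relaxation (0 <= x_j <= 1): greedy by value/weight ratio is optimal."""
    items = sorted(zip(values, weights),
                   key=lambda vw: Fraction(vw[0], vw[1]), reverse=True)
    cap, total = Fraction(capacity), Fraction(0)
    for v, w in items:
        take = min(Fraction(1), cap / w)  # take the item fractionally
        total += take * v
        cap -= take * w
    return total

print(ilp_optimum())          # exact integer optimum
print(lp_relaxation_bound())  # an upper bound on the integer optimum
```

The relaxation is cheap to compute and bounds the true optimum, which is exactly how branch-and-bound solvers use it.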
The key point here is that integer linear programming is a widespread formalism for expressing problems. Basically, I understand your question as the following one.
"Why do people use a model that is algorithmically hard to solve to
describe practical problems?"
Well, if that shortcoming could be circumvented in general, it would be a good idea to express every problem there is in terms of sorting, which is algorithmically easy.
NP-hard refers to the complexity of algorithms in the worst case. For most NP-hard problems, we have effective algorithms (heuristic or exact) that perform well most of the time, even if they do not perform well in the worst case. ILP is therefore a very useful tool in practice, even if there are some problems that it doesn't do well on.
I have a hammer. There are some jobs that my hammer is just no good for, or would take a very long time on. But it's still a very useful tool, because it can do a lot of jobs for me very well.
ILP is, in many ways, the same thing.

If we prove there is no starvation, we don't need to prove that there is no deadlock or livelock (progress)?

I googled for proofs of Peterson's algorithm and noticed that most sites don't bother proving the progress requirement. Why is that? Can someone explain?
If I understand your question correctly, the answer is simply that the absence of starvation implies the absence of deadlocks and livelocks: if no process starves, then every process that wishes to progress eventually does so, and hence there can be no deadlock or livelock. This follows easily from the definitions of the respective notions.

Is GAP (graph accessibility) NP-Complete?

Is the GAP (graph accessibility problem) NP-complete?
It has both deterministic polynomial-time and non-deterministic polynomial-time algorithms that solve it, but I don't think that overrides the standard way of showing NP-completeness: showing it is in NP and NP-hard, hence NP-complete.
I heard both versions from older students than me.
So in the end, is it or not NP-Complete?
Wikipedia says that the problem is NL-Complete, which means that it’s also in P. This makes it extremely unlikely that it is NP-Complete. If it was, that would prove that P=NP, which is a very old and unsolved question. And it is widely assumed that P≠NP.
You won’t be able to prove that it is not NP-Complete either, because that would prove P≠NP.
If you can prove that it is NP-complete or that it is not NP-complete, you will receive an award of one million dollars.
So in summary the answer is: it seems very unlikely, but it is just as unlikely that you can prove anything in either direction :).
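For completeness, here is why GAP is easy in practice: a sketch of s-t reachability by breadth-first search, which runs in time linear in the size of the graph (and the problem is even NL-complete, i.e. solvable non-deterministically in logarithmic space).

```python
from collections import deque

# GAP (s-t reachability in a directed graph) solved by BFS in linear time,
# which is why the problem sits comfortably inside P.

def reachable(adj, s, t):
    """Return True iff there is a directed path from s to t in adj."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# Hypothetical example graph: 3 -> 0 -> 1 -> 2.
adj = {0: [1], 1: [2], 2: [], 3: [0]}
print(reachable(adj, 0, 2))  # True
print(reachable(adj, 2, 3))  # False
```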
