Recently, I read a proof that was meant to show that a problem was strongly NP-hard, simply by reducing to it (in polynomial time) from a strongly NP-hard problem. That made no sense to me. I would have thought you would have to show that any numbers used in the reduction, and the instances of the problem you are reducing to, were polynomially bounded in the problem size.
I then saw that Wikipedia gave the same general instructions for this kind of proof, but I was not really convinced until I saw Garey & Johnson say basically the same thing. To be specific, they say: "If $\Pi$ is NP-hard in the strong sense and there exists a pseudo-polynomial transformation from $\Pi$ to $\Pi'$, then $\Pi'$ is NP-hard in the strong sense," and "Note that, by definition, a polynomial time algorithm is also a pseudo-polynomial time algorithm."
Of course, I take the word of Garey & Johnson on this—I just don’t understand how it can be correct, which is what I’d like some help with. Here’s my (presumably flawed) reasoning…
There are strongly NP-complete problems, and all these are (by definition) strongly NP-hard as well as NP-complete. Every NP-complete problem can (by definition) be reduced to any other in polynomial (and therefore pseudopolynomial) time. Given the statements of Garey & Johnson, it would therefore seem to me that every NP-complete problem is strongly NP-complete, and, therefore, that every NP-hard problem is strongly NP-hard. This, of course, makes the concept of strong NP-hardness meaningless … so what am I missing?
Edit/update (based on Tsuyoshi Ito’s answer):
The requirement (d) from Garey & Johnson’s definition of a (pseudo)polynomial transformation (the kind of reduction needed to confer NP-hardness in the strong sense) is that the largest numerical magnitude in the resulting instance be polynomially bounded, as a function of the problem size and maximal numerical magnitude of the original. This, of course, means that if the original problem is NP-hard in the strong sense (that is, even when its numerical magnitudes are polynomially bounded in the problem size), this will also be true of the problem you reduce to. This would not necessarily be the case for an ordinary polytime reduction (that is, one without this extra requirement).
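Written out explicitly (restating the paraphrase above rather than quoting G&J verbatim; $\mathrm{Max}$, $\mathrm{Max}'$, and $\mathrm{Length}$ denote the magnitude and size functions of the original and target problems), requirement (d) asks for a two-variable polynomial $q_2$ with

$$\mathrm{Max}'[f(I)] \;\le\; q_2\bigl(\mathrm{Max}[I],\, \mathrm{Length}[I]\bigr) \qquad \text{for all } I \in D_{\Pi}.$$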
Answers:
According to the terminology in the paper by Garey and Johnson, polynomial-time transformations are not necessarily pseudo-polynomial transformations because they may violate item (d) in Definition 4.
To expand on Tsuyoshi's answer:
In the context of Garey and Johnson, consider a transformation from PARTITION (p. 47, Sec. 3.1) to MULTIPROCESSOR SCHEDULING (p. 65, Sec. 3.2.1, Item (7)).
The transformation (by restriction) involves setting $D = \frac{1}{2}\sum_{a \in A} l(a)$. But if the lengths of the tasks, the $l(a)$, are too large, then it cannot be the case that there exists a two-variable polynomial $q_2$ such that, $\forall I \in D_{\Pi}$, $\mathrm{Max}'[f(I)] \le q_2(\mathrm{Max}[I], \mathrm{Length}[I])$ (i.e., item (d) in the definition of a pseudo-polynomial transformation).
For instance, just consider an instance of MULTIPROCESSOR SCHEDULING where the values of all of the $l(a)$ are exponential in the number of tasks (i.e., in $|A|$). You're still manipulating the same number of "combinatorial objects" (so to speak), but they're all extremely large. Hence, NP-complete, but not strongly NP-complete.
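To make the size-versus-magnitude point concrete, here is a minimal Python sketch of the restriction described above (the function name and the dictionary encoding of the instance are my own illustration, not G&J's notation):

```python
def partition_to_scheduling(lengths):
    """Sketch of the restriction above: a PARTITION instance (a list of
    item sizes) becomes a MULTIPROCESSOR SCHEDULING instance with two
    processors and deadline D = (1/2) * sum of the lengths.
    The instance encoding is illustrative only."""
    D = sum(lengths) // 2  # deadline; its magnitude tracks the input numbers
    return {
        "task_lengths": list(lengths),  # the l(a) values, carried over unchanged
        "processors": 2,
        "deadline": D,
    }

# With small task lengths D stays small, but with lengths exponential in |A|
# the deadline D is exponential too: the instance's Max grows with the
# numbers, not with the number of tasks |A|.
print(partition_to_scheduling([3, 1, 1, 2, 2, 1]))
# {'task_lengths': [3, 1, 1, 2, 2, 1], 'processors': 2, 'deadline': 5}
```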
You might want to read Wikipedia on a related topic. For instance, we have a dynamic programming-based polynomial-time algorithm for the NP-complete KNAPSACK problem -- at least, as long as the numbers are small enough. When the numbers get too big, this "polynomial-time" algorithm will display "exponential behavior." (G&J, p. 91, Sec 4.2)
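For illustration, here is a minimal sketch of that dynamic-programming idea for 0/1 KNAPSACK (a textbook formulation, not code taken from G&J); its running time is proportional to the numeric value of the capacity, which is exactly what makes it pseudo-polynomial rather than polynomial:

```python
def knapsack_max_value(weights, values, capacity):
    """Textbook 0/1 KNAPSACK dynamic program (illustrative sketch).
    Runs in O(n * capacity) time: polynomial in the *value* of capacity,
    but exponential in the number of bits needed to write capacity down,
    i.e. pseudo-polynomial -- the "exponential behavior" on instances
    with huge numbers mentioned above."""
    n = len(weights)
    # best[c] = best total value achievable with remaining capacity c
    best = [0] * (capacity + 1)
    for i in range(n):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, weights[i] - 1, -1):
            best[c] = max(best[c], best[c - weights[i]] + values[i])
    return best[capacity]

# Example: small numbers are fast; but with capacity = 10**9 the very same
# code needs a billion-entry table, so the blow-up is driven by the numeric
# magnitude, not by the number of items.
print(knapsack_max_value([3, 4, 5], [4, 5, 6], 8))  # -> 10
```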