We know that the knapsack problem can be solved in O(nW) complexity by dynamic programming. But we say this is an NP-complete problem. I find this hard to understand.
(n is the number of items. W is the maximum volume.)
O(n*W) looks like polynomial time, but it is not; it is pseudo-polynomial. Time complexity measures the time an algorithm takes as a function of the length in bits of its input. The dynamic programming solution is indeed linear in the value of W, but exponential in the length of W, and that's what matters!
More precisely, the time complexity of the dynamic programming solution for the knapsack problem is basically given by a nested loop:

// here goes other stuff we don't care about
for (i = 1 to n)
    for (j = 0 to W)
        // here goes other stuff

Thus, the time complexity is clearly O(n*W).
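As a sketch, that nested loop corresponds to the standard 0/1 knapsack dynamic program; a minimal Python version (with made-up item values and weights) might look like this:

```python
def knapsack(values, weights, W):
    """0/1 knapsack via dynamic programming: the table has n*W entries."""
    n = len(values)
    # dp[j] = best total value achievable with capacity j
    dp = [0] * (W + 1)
    for i in range(n):                           # outer loop: n iterations
        for j in range(W, weights[i] - 1, -1):   # inner loop: up to W iterations
            dp[j] = max(dp[j], dp[j - weights[i]] + values[i])
    return dp[W]

# Tiny illustrative instance: 3 items, capacity 5
print(knapsack([60, 100, 120], [1, 2, 3], 5))  # prints 220 (items 2 and 3)
```

Note that the running time depends on the numeric value of W, not on how many bits it takes to write W down.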
What does it mean to increase the size of the input linearly? It means using progressively longer item arrays (so n, n+1, n+2, ...) and a progressively longer W (so, if W is x bits long, after one step we use x+1 bits, then x+2 bits, ...). But the value of W grows exponentially with x, thus the algorithm is not really polynomial, it's exponential (but it looks like it is polynomial, hence the name "pseudo-polynomial").
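To see this growth concretely, here is a small Python sketch (with an illustrative, fixed n) that counts inner-loop iterations as the bit length x of W increases one bit at a time:

```python
# Iterations of the inner loop (0..W) as a function of the bit length x of W.
# The input grows linearly (one extra bit per step), but the work grows exponentially.
n = 10  # fixed number of items (illustrative)
for x in range(1, 9):
    W = 2**x - 1              # largest value representable in x bits
    iterations = n * (W + 1)  # total loop iterations of the DP
    print(f"x = {x} bits -> W = {W:3d} -> about {iterations} iterations")
```

Each added bit roughly doubles the work, which is exactly the exponential behavior described above.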
W and n here represent the number of loop iterations. You can encode them any way you want, and that loop is still going to iterate n*W times. I believe the reason this is called "pseudo-polynomial" is that n is the actual input size, and W can be much, much larger than n, so it can't fairly be treated as a constant. – Dragoman N

... weights and total sum being W. Is this pseudo-polynomial in W? – Dielle

1) Consider a for loop that goes from 1 to n (where n is the input); in this case, when your loop does 10^12 iterations, the size of the input is still only around 40 bits. The number of iterations grows faster than the number of bits needed to encode the input, so the time complexity is not linear. 2) Again, consider a for loop that iterates over an input array (of size n) from 1 to n; if you have 10^12 iterations, it means your array contains 10^12 items. The number of iterations grows at the same pace as the size of the input, so the time complexity is linear. – Inhalator

A for loop going from 1 to n is exponential? – Audwen

In the 0/1 knapsack problem, we need two inputs (an array and an integer) to solve the problem:
Let's assume n=10 and W=8, so the time complexity is T(n) = O(nW) = O(10*8) = O(80).

If you double the size of n:

n = [n1, n2, n3, ..., n10] -> n = [n1, n2, n3, ..., n20]

the time complexity becomes T(n) = O(nW) = O(20*8) = O(160).

But if you double the size of W, it does not mean W=16; rather, its binary representation becomes twice as long:

W = 1000 -> W = 10000000 in binary (from 4 bits to 8 bits, i.e. from 8 to 128)

so T(n) = O(nW) = O(10*128) = O(1280).

The time needed increases exponentially in the length of W, so it's an NP-complete problem.
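The arithmetic in this example can be checked with a few lines of Python (the numbers are purely illustrative):

```python
n, W = 10, 8
print(n * W)             # 80: baseline cost

n2 = 2 * n
print(n2 * W)            # 160: doubling n only doubles the cost

bits = W.bit_length()    # W = 8 is 1000 in binary: 4 bits
W2 = W << bits           # doubling the bit length: 1000 -> 10000000, i.e. 8 -> 128
print(n * W2)            # 1280: doubling the *length* of W multiplies the cost by 2^bits
```

Doubling the array doubles the cost, but doubling the number of bits in W multiplies the cost by 2^bits.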
It all depends on which parameters you put inside the O(...).

If the target weight is bounded by a number W, then the problem has O(n*W) complexity, as you mentioned.

But if the weights can be arbitrarily big, so that you need an algorithm whose complexity is independent of W, then the problem is NP-complete (O(2^n * n) in the most naive implementation).
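A sketch of that naive O(2^n * n) approach, assuming we simply enumerate every subset of items (Python, with made-up data):

```python
from itertools import combinations

def knapsack_bruteforce(values, weights, W):
    """Try all 2^n subsets; evaluating each subset costs O(n)."""
    n = len(values)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            total_weight = sum(weights[i] for i in subset)
            if total_weight <= W:
                best = max(best, sum(values[i] for i in subset))
    return best

print(knapsack_bruteforce([60, 100, 120], [1, 2, 3], 5))  # prints 220
```

Notice that the running time here depends only on n, not on the magnitude of W, which is exactly the trade-off the answer describes.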
The size of the input is log(W) bits for the weight (plus O(n) for the "value" and "weight" arrays).

So the input size of the weight is j = log(W) (and not merely W), which means W = 2ʲ (since binary encoding is used).

The final complexity is O(n * W).

This O(n * W) can be rewritten as O(n * 2ʲ), which is exponential in the size of the input.

So, this solution is not polynomial.

This is because the knapsack problem has a pseudo-polynomial solution and is thus called weakly NP-complete (and not strongly NP-complete).
You can read this short explanation: The NP-Completeness of Knapsack.
To understand NP-completeness, you have to learn a bit of complexity theory. However, basically, it's NP-complete because an efficient algorithm for the knapsack problem would also be an efficient algorithm for SAT, TSP and the rest.