## 16 Comments:

Question 1(a)

I am confused about the wording of 1(a) -- does it mean that there cannot be a complete tester for 3LIN of the following kind: pick a clause at random and check if it is satisfied? Or does it mean that the above cannot hold for the 3LIN formulas spit out by the PCP reduction?

--atri

Hi Atri. Good point -- that question is sort of worded poorly. Here is what we meant:

" Assuming P does not equal NP, show that the following statement is FALSE: 'There is a probabilistically checkable proof for an NP-complete problem in which a) three bits of the proof get queried, b) all predicates are of the form "x_{i1} + x_{i2} + x_{i3} = b (mod 2)", c) completeness is 1, and d) soundness is 51%.' "

Does that clarify? I will update the online version of the homework.
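Incidentally, the special form in (b) is what makes the statement refutable: equations x_{i1} + x_{i2} + x_{i3} = b (mod 2) form a linear system over GF(2), and Gaussian elimination decides in polynomial time whether such a system is perfectly satisfiable. A minimal sketch of that subroutine (my own illustrative code, not part of the homework):

```python
# Decide whether a 3LIN system over GF(2) has a perfectly satisfying
# assignment, via Gaussian elimination on the augmented matrix.
# With completeness 1, the accepting proofs of the PCP in 1(a) would be
# exactly the solutions of such a system.

def gf2_solvable(equations, n):
    """equations: list of ([i1, i2, i3], b) meaning x_i1 + x_i2 + x_i3 = b (mod 2)."""
    rows = []
    for idxs, b in equations:
        row = [0] * n + [b % 2]          # augmented row over GF(2)
        for i in idxs:
            row[i] ^= 1
        rows.append(row)
    r = 0
    for col in range(n):
        # find a pivot row with a 1 in this column
        piv = next((k for k in range(r, len(rows)) if rows[k][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for k in range(len(rows)):
            if k != r and rows[k][col]:
                rows[k] = [a ^ c for a, c in zip(rows[k], rows[r])]
        r += 1
    # the system is inconsistent iff some row reduced to 0 = 1
    return all(any(row[:n]) or row[n] == 0 for row in rows)
```

How this refutes the quoted statement is, of course, the content of 1(a).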

yes it does. Thanks!

--atri

For the approximability problems 1b, 1c and 1d -- what do we base the proofs off of? Can we argue from the label-cover results you are showing us now, or should we somehow argue from the answer to (1a)?

-Anand.

Hi Anand -- as hinted, (1c) and (1d) should be based on (1b). For (1b) itself you should freely use the Hastad result stated at the beginning.

(If you can prove (1b) straight from the hardness of LabelCover mentioned today, good for you! But the proof of this will probably take us about a lecture in class...)

This is related to problem 2(b) and the lecture on 11/7:

The lecture mentions that a PCP with a constant number of queries and s = 1 - epsilon implies a PCP with log(1/epsilon) queries and soundness epsilon. Does this implication depend on Raz's parallel repetition theorem, or some variant of the argument?

Also -- doesn't this directly imply the answer to 2(b)? Once you have a PCP with s = epsilon, isn't it really as good as having a 2P1R game with value epsilon? (The verifier for the 2P1R game would use the same predicate as the PCP verifier, and both P1 and P2 would return the same proof as the PCP prover, coded suitably.)

Could you help clarify? Maybe I'm assuming things I'm not allowed to...

Thanks,

Anand.

Hi Anand -- yes, this problem might seem a bit strange in light of what we've done in class.

Indeed, using Raz's parallel repetition theorem is how one typically shows that

(a) Max-CG(Sigma)_{1,eps},

(b) distinguishing value 1 and value eps 2P1R games,

(c) LabelCover_{1,eps},

are all NP-hard.

Note that (c) is a special case of (b), which is a special case of (a) -- so hardness of (c) implies the hardness of the other two.

We made up Question 2 before we were sure how much we would discuss 2P1R games in class. As it turned out, we discussed them extensively, so the "hint" is now obvious -- 2P1R are like CG-problems with the extra property of being bipartite.

So, the whole point of Question 2b is: Forget Raz's theorem -- just suppose someone tells you that (a) is hard -- i.e., Max-CG_{1,eps} is hard. Show how this implies that (b) is hard -- i.e., that it's hard even when the graph is bipartite.

I.e., although it's formally harder to show hardness when restricted to the bipartite case, show that actually the non-bipartite and bipartite case are fairly easily equivalent.
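One concrete way to set this up (a sketch of my own, not necessarily the intended reduction): keep the original vertices on the left, add one right vertex per constraint edge whose labels are the satisfying pairs for that edge, and check consistency between an edge's pair and its endpoints' labels.

```python
# Hedged sketch of bipartizing a constraint graph: left side = original
# vertices (labels from sigma), right side = one vertex per edge, labelled
# by a pair (a, b) satisfying that edge's constraint.  A perfect labelling
# survives the translation; the consistency checks let a good bipartite
# labelling be decoded back.
from itertools import product

def bipartize(edges, sigma, constraint):
    """For each edge (u, v), list its allowed right labels: the pairs
    (a, b) with constraint(u, v, a, b) true."""
    return {(u, v): [(a, b) for a, b in product(sigma, sigma)
                     if constraint(u, v, a, b)]
            for (u, v) in edges}

def bipartite_value(edges, left_labelling, right_labelling):
    """Fraction of (endpoint, edge) incidences where the edge's chosen
    pair agrees with the endpoint's label."""
    good = total = 0
    for (u, v) in edges:
        a, b = right_labelling[(u, v)]
        good += (left_labelling[u] == a) + (left_labelling[v] == b)
        total += 2
    return good / total
```

(The part worth proving -- and the point of 2(b) -- is that a bipartite labelling of high value decodes back to a good labelling of the original graph.)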

The real reason we asked this question is that there is a certain further restricted version of Max-CG_{1,eps} (the "Unique Label Cover" problem) that is only *conjectured* to be NP-hard. This homework problem can be used to show that the bipartite and non-bipartite version of this conjecture are equivalent.

-- Ryan

In question 8b, is the square on the right hand side inside or outside the expectation?

Prasad:

Inside. It's E_x[ (f^{\leq S \cap T}(x) )^2].
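(A toy numerical check of Parseval, E_x[f(x)^2] = sum_S fhat(S)^2, the identity such squared-expectation computations usually reduce to -- illustrative code with an f of my own choosing:)

```python
# Verify Parseval numerically for a toy f: {-1,1}^3 -> {-1,1} (majority).
from itertools import product
from math import prod, isclose

def fourier_coeffs(f, n):
    """fhat(S) = E_x[f(x) chi_S(x)], x uniform on {-1,1}^n, S a bitmask."""
    pts = list(product((-1, 1), repeat=n))
    return {S: sum(f(x) * prod(x[i] for i in range(n) if S >> i & 1)
                   for x in pts) / len(pts)
            for S in range(1 << n)}

maj3 = lambda x: 1 if sum(x) > 0 else -1   # toy choice of f
coeffs = fourier_coeffs(maj3, 3)
second_moment = sum(maj3(x) ** 2 for x in product((-1, 1), repeat=3)) / 8
assert isclose(second_moment, sum(c * c for c in coeffs.values()))
```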

-- Ryan

Question 4(d)

I am probably missing something really silly, but I'll ask this anyway :-). In 4(d), if we are OK with an algorithm which runs in time n^{O(log n)} [instead of randomized n^{O(1)} as asked in the question], is the problem completely trivial -- as in, would running the approximation algorithm for clique on the powered graph give the desired result?

And while we are at it: when you say randomized polynomial time, does it mean the expected running time is polynomial? Or something else?

Atri:

First, by randomized polytime we mean a Monte Carlo or BPP algorithm, i.e., the algorithm can fail with some small probability.
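A minimal sketch of why a small constant failure probability suffices (illustrative numbers of my own): a majority vote over independent runs drives the error down exponentially.

```python
# Exact error of a majority vote over t independent runs of a BPP-style
# algorithm, each of which is wrong with probability p_err < 1/2.
from math import comb

def majority_error(p_err, t):
    """P[more than half of t independent runs are wrong] (take t odd)."""
    return sum(comb(t, k) * p_err ** k * (1 - p_err) ** (t - k)
               for k in range(t // 2 + 1, t + 1))
```

With p_err = 1/3, for instance, t = 61 runs already push the failure probability below 2%, and it keeps dropping exponentially in t.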

Regarding the need to sample a polynomial-sized subgraph: this is best illustrated by asking whether one can get N^delta hardness for clique without it. Suppose we take the k'th power. The gap is 2^k (say), while the number of vertices is N^k, so the gap is only (N^k)^{log 2 / log N}. Thus even if k is non-constant, the gap can never be (instance size)^delta for a constant delta. To get gaps of N^{Omega(1)}, one really needs to work with a smaller subgraph of the power graph.
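The arithmetic behind this, spelled out (illustrative numbers, not from the problem):

```python
# Why powering alone cannot give (instance size)^delta clique gaps:
# after the k'th power the gap is 2^k but the size is M = N^k, so
# gap = M**(log 2 / log N) -- the exponent log2/logN does not grow with k.
from math import log, isclose

N = 10 ** 6                          # original vertex count (illustrative)
exponents = []
for k in (1, 5, 50):
    M = N ** k                       # vertices after k-fold powering
    gap = 2 ** k                     # amplified gap
    exponents.append(log(gap) / log(M))   # delta such that gap = M**delta

# the exponent is the same for every k: k*log2 / (k*logN) = log2 / logN
assert all(isclose(d, log(2) / log(N)) for d in exponents)
```

This is why the problem passes to a smaller, polynomial-sized piece of the power graph.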

-- Venkat

Question 7(c)

I am pretty sure there is a typo in the probability expression in 7(c). I think instead of 3/4 + 3/4 [...] it should be 3/4 - 3/4 [...]. Am I missing something?

Atri, re 7(c): Yes, you're right. That should read 3/4 - 3/4 sum_S ..., not 3/4 + 3/4 sum_S ...

Sorry about that -- I'll correct the online version.

r

What is the difference between Theorem 2.8 in Lecture 10 and 4(c)(ii)?

The c range is different, but that seems a minor technical detail to me.

Sridhar: Not much, really. We're just asking you to give the results using the language of graph products...

In question 8 there is a reference to question (5a). This is a typo; we meant to reference question (6a). But you probably all figured that out :)
