PSPACE is a black hole gobbling up the other complexity classes.
And you thought the LHC was dangerous!
Troy Lee and Jérémie Roland:
A strong direct product theorem for quantum query complexity
Suppose you want to solve k independent instances of a problem using k times the resources needed for a single instance. For example, say you want to juggle 4 balls with two hands vs. 2 balls with one hand.
A Strong Direct Product Theorem (SDPT) says (roughly) the following:
If we require r resources to solve one instance of a problem with success probability p, then using fewer than kr resources to solve k instances cannot succeed with probability greater than $latex p^k$. For example, if one instance needs r queries to succeed with probability 0.9, then solving 20 instances with fewer than 20r queries succeeds with probability at most $latex 0.9^{20} \approx 0.12$.
Intuitively, it ought to hold in many situations, but has been proven to hold only in a few. Some applications of SDPT:
- Making a hard function really hard, e.g. Yao’s XOR lemma
- Amplifying the soundness of a protocol, e.g. Raz
Here they study Quantum Query Complexity, which uses a black box oracle that can be queried in superposition, as a proxy for quantum time complexity. It behaves so well that it scarcely deserves to be called a computational model.
The big picture is that they use adversary methods and the state generation formalism due to Ambainis. The method defines a type of potential function and then bounds the rate of change from one query to the next. In the additive setting, they bound the difference, and in the multiplicative setting, they bound the ratio. The state generation method has the advantage that it allows the separation of lower bounds into two parts: the bound on the zero-error state generation and the minimization of this bound over the valid final Gram matrices of a successful protocol.
The additive adversary method is known to characterize bounded-error quantum query complexity, and since the multiplicative adversary bound is at least as large as the additive one, the multiplicative method characterizes it as well. One of the achievements of this paper is that they show the multiplicative adversary method still works when the convergence ratio is strictly bounded away from one. This immediately implies an SDPT by the results of Špalek '08, but they reprove the SDPT from scratch so that they can get a better optimization of the parameters.
Conclusion and open problems: For Boolean functions, they show an XOR lemma. Does SDPT hold for all state generation problems? Is the multiplicative adversary method also tight in the case of large error?
André Chailloux and Or Sattath:
The Complexity of the Separable Hamiltonian Problem
First recall that QMA is the quantum analog of NP: it's the set of problems which can be verified efficiently on a quantum computer when given a quantum proof (i.e. a quantum state for a witness). For QMA with 2 provers, denoted QMA(2), there are two unentangled states which are given as proofs. It's similar to the interrogation of suspects: Arthur can play the two Merlins against each other. (Each Merlin still sends poly(n) qubits, so QMA(2) is at least as strong as QMA, but it is probably much stronger.) Pure n-representability is known to be in QMA(2) but not known to be in QMA. It's known that QMA(k) = QMA(2) for all k>2. We know only that QMA(2) $latex \subseteq$ NEXP; this seems like a pretty weak upper bound!
The main result of this paper is to find a complete problem for QMA(2). The separable local Hamiltonian problem is actually in QMA, but the separable sparse Hamiltonian problem is QMA(2) complete.
The problem: given a Hamiltonian $latex H = \sum_i h_i$ where each term is either local or sparse (depending on the variant), decide whether there exists a separable state with energy at most a, or every separable state has energy at least b. (Here b-a > 1/poly(n) is the promise gap.)
Separable Local Hamiltonian: Given a local Hamiltonian H, find a separable state (say, across an n-by-n qubit cut) $latex \rho$ that minimizes $latex \mathrm{tr}\, H\rho$.
This sure doesn’t look like it’s in QMA!
If the prover sends $latex \rho$, there's no way to verify that it is a product state, because you can't test entanglement with a single copy.
Instead, the prover sends $latex \rho$ AND classical descriptions of all k-qubit marginals (k comes from H being a k-local Hamiltonian). Checking the consistency of these marginals is in QMA; the witness is simply $latex \rho$. (In fact, the consistency problem is QMA-complete.)
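Here is a minimal numpy sketch (our illustration, not code from the paper) of the objects involved: computing k-qubit marginals of a witness state, plus the kind of cross-check Arthur can run on the classical marginals alone, namely that overlapping marginals agree on shared qubits.

```python
import numpy as np

def marginal(rho, keep, n):
    """Reduced density matrix of an n-qubit state rho (2^n x 2^n),
    keeping the qubits listed in `keep` (in order)."""
    rest = [q for q in range(n) if q not in keep]
    perm = list(keep) + rest
    axes = perm + [q + n for q in perm]      # same reordering on the bra side
    r = rho.reshape([2] * (2 * n)).transpose(axes)
    dk, dr = 2 ** len(keep), 2 ** len(rest)
    r = r.reshape(dk, dr, dk, dr)
    return np.trace(r, axis1=1, axis2=3)     # trace out the unkept qubits

# Toy witness: 3-qubit GHZ state; the prover's classical message would be
# the list of all 2-qubit marginals of this state.
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz)
m01 = marginal(rho, [0, 1], 3)
m12 = marginal(rho, [1, 2], 3)

# Consistency check: qubit 1's state derived from either marginal must agree.
assert np.allclose(marginal(m01, [1], 2), marginal(m12, [0], 2))
```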
Separable Sparse Hamiltonian: A totally different story! Now it's QMA(2)-complete. (The first known such problem.) Immediately we see that a sparse Hamiltonian need not have any local qubit structure, so we can't rely on the strategy of looking at marginals. Clearly the problem is in QMA(2). To show completeness, they adapt Kitaev's Hamiltonian for encoding a circuit (his so-called clock construction). One challenge is that if the QMA(2) verifier makes an entangled measurement, then the history state will be entangled, which is not of the form we want. We want to simulate a computation in which only the input is guaranteed unentangled, with later steps free to create entanglement; however, what we are given is the ability to minimize the energy of the sparse Hamiltonian over separable states, period.
The trick is that 1001.0017 shows that QMA(2) = QMA(2)-SEP, which is defined to be the class where the verifier is restricted to making separable (i.e. non-entangling) measurements, at least for the yes outcome. This measurement requires a controlled-SWAP on the two proofs, which, interestingly, is sparse but not local. You might think you can build this out of a bunch of small controlled-SWAPs, but this would produce entanglement at intermediate steps. SWAP has a complicated relationship with entanglement: it doesn't entangle product states, but only if it's applied to the entire state; applying it to subsystems can create entanglement, as the sketch below shows.
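A quick numpy check of that last claim (our example, not from the paper): take each proof register to be an internally entangled EPR pair, so the joint state is product across the proof cut. Swapping the entire registers preserves this, but swapping just one qubit from each side pushes the internal entanglement across the cut.

```python
import numpy as np

def schmidt_rank(psi, dA, dB, tol=1e-10):
    """Schmidt rank of a pure state across a dA:dB cut."""
    s = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
    return int(np.sum(s > tol))

def permute_qubits(psi, perm, n):
    """Reorder the qubits of an n-qubit pure state."""
    return psi.reshape([2] * n).transpose(perm).reshape(-1)

# Register A = qubits (a1, a2), register B = qubits (b1, b2).
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)
psi = np.kron(epr, epr)                  # |EPR>_{a1 a2} ⊗ |EPR>_{b1 b2}
print(schmidt_rank(psi, 4, 4))           # 1: product across the A:B cut

# SWAP all of A with all of B: qubit order (a1,a2,b1,b2) -> (b1,b2,a1,a2).
print(schmidt_rank(permute_qubits(psi, [2, 3, 0, 1], 4), 4, 4))   # still 1

# SWAP only a1 <-> b1: each internal EPR pair now straddles the cut.
print(schmidt_rank(permute_qubits(psi, [2, 1, 0, 3], 4), 4, 4))   # 4: entangled
```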
Many of the same open questions remain, about the status of QMA vs QMA(2) (although there is evidence from the case of log-size proofs to suggest that they are different), and about whether pure N-representability is also QMA(2) complete.
Yaoyun Shi and Xiaodi Wu:
Epsilon-net method for optimizations over separable states
The QMA(2) session continues! Once upon a time, I (Aram) thought QMA(2) was the kind of thing that complexity theorists make up to have more things to talk about, but I've since made a total U-turn. The problem, especially its log-size-witness variant, is equivalent to a ridiculously large number of problems, like separability testing, mean-field Hamiltonians, computing minimum output Rényi entropies of channels, finding the largest singular value of tensors, estimating the $latex 2\rightarrow 4$ norm of matrices, and many others. So it's exciting that the only dimension-independent hardness result we know requires quantum techniques (1001.0017). As does the only (classical) approximation algorithm (1011.2751)!
This talk changes that, with a pair of elementary approximation algorithms, one of which matches the performance of a weaker variant of 1010.1750 (details below).
To understand the basic problem, we don't need to think about provers or circuits. The problem is as follows: given a $latex d^2 \times d^2$ Hermitian matrix H, estimate $latex {\rm OptSep}(H) := \max\{ {\rm tr}\, H\rho : \rho \in {\rm Sep}(d,d)\}$, where $latex {\rm Sep}(d,d)$ denotes separable states with d dimensions for each party. This is a convex optimization problem, but it's hard to test membership in $latex {\rm Sep}(d,d)$; indeed, testing membership is just as hard as the optimization itself. Alternatively, we know that the maximum is achieved for a $latex \rho$ which is a pure product state, where membership is easy to test, but this set is non-convex.
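As a toy illustration of the problem (our sketch, not the paper's algorithm, which builds epsilon-nets over joint numerical ranges rather than over states), here is a brute-force epsilon-net over pure product states for d=2:

```python
import numpy as np

def qubit(theta, phi):
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def opt_sep_net(H, steps=20):
    """Estimate OptSep(H) for H on C^2 ⊗ C^2 by gridding over pure
    product states |a>|b> (the maximum is attained at one of these)."""
    thetas = np.linspace(0, np.pi, steps)
    phis = np.linspace(0, 2 * np.pi, steps, endpoint=False)
    best = -np.inf
    for t1 in thetas:
        for p1 in phis:
            a = qubit(t1, p1)
            for t2 in thetas:
                for p2 in phis:
                    v = np.kron(a, qubit(t2, p2))
                    best = max(best, np.real(np.vdot(v, H @ v)))
    return best

# Sanity check: for the projector onto an EPR pair, the separable value
# is 1/2 even though the global maximum eigenvalue is 1.
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(opt_sep_net(np.outer(epr, epr)))   # ≈ 0.5
```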
It is known that performing this optimization up to 1/poly(d) accuracy is NP-hard. (This result is originally due to Gurvits, although that's not so widely known, since most people only noticed the part of his paper which talks about the hardness of testing membership in $latex {\rm Sep}(d,d)$.) Constant accuracy is only known to be as hard as solving a 3-SAT instance of length $latex \log^2(n)$.
This talk gives two algorithms for approximating OptSep.
One works well when H is nearly product; in particular, when H can be decomposed as $latex H=\sum_{i=1}^m A_i \otimes B_i$. The algorithm is to compute the joint numerical ranges of $latex A_1,\ldots,A_m$ and $latex B_1,\ldots,B_m$, meaning the sets of possible $latex ({\rm tr}\,\rho A_1, \ldots, {\rm tr}\,\rho A_m)$ and of possible $latex ({\rm tr}\,\rho B_1, \ldots, {\rm tr}\,\rho B_m)$. If m is not too big, then this is reasonable. Moreover, we can enumerate over the corresponding epsilon-nets with only a small amount of space. This implies that when QMA(2) is restricted to nearly product measurements (i.e. $latex m \leq \mathrm{poly}(n)$), then the resulting class is contained in PSPACE.
The second algorithm does eigenspace enumeration directly on H, and achieves additive error $latex \delta$ in time $latex {\rm poly}(d) \exp(\|H\|_2^2\delta^{-2} \log(\|H\|_2/\delta))$. This matches one of the corollaries of 1011.2751, but not their main result.
Abel Molina and John Watrous:
Hedging bets with correlated quantum strategies
Alice and Bob are at it again, playing a nonlocal game. Here is the framework: Alice prepares a question and sends it to Bob; Bob responds by sending an answer to Alice. Based on Bob's answer, as well as whatever memory she has of her own question, Alice decides whether Bob has passed or failed the test.
Alice’s actions are specified by a density operator and a two outcome POVM. We assume that Bob knows a complete description of Alice. Then the games that they consider here can be formulated as a semidefinite program to determine Bob’s optimal success probability for passing the test.
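Here is a minimal sketch of that SDP formulation (assuming cvxpy; the toy test below is our own, not one from the paper): Bob's strategy is a channel from the question register Y to the answer register A, encoded by its Choi matrix J, his success probability is linear in J, and the channel constraints J ⪰ 0, Tr_A J = I_Y are semidefinite.

```python
import numpy as np
import cvxpy as cp

# Toy test: Alice keeps qubit X of an EPR pair and sends qubit Y to Bob;
# she accepts iff her qubit and Bob's answer qubit A both measure |0>.
dX = dY = dA = 2
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(epr, epr)                 # Alice's state on X ⊗ Y
P = np.zeros((4, 4)); P[0, 0] = 1.0      # accept POVM element |00><00| on X ⊗ A

def apply_choi(J, rho):
    """(id_X ⊗ Phi)(rho) for the channel Phi: Y -> A with Choi matrix J."""
    R = rho.reshape(dX, dY, dX, dY)
    K = J.reshape(dY, dA, dY, dA)
    out = np.einsum('xuzv,uavb->xazb', R, K)   # contract the Y indices
    return out.reshape(dX * dA, dX * dA)

# Success probability is linear in J; extract its coefficient matrix C.
dJ = dY * dA
C = np.zeros((dJ, dJ), dtype=complex)
for i in range(dJ):
    for j in range(dJ):
        E = np.zeros((dJ, dJ)); E[i, j] = 1.0
        C[i, j] = np.trace(P @ apply_choi(E, rho))

# SDP over Choi matrices: J >= 0 and Tr_A J = I_Y (trace preservation).
J = cp.Variable((dJ, dJ), hermitian=True)
TrA = 0
for a in range(dA):
    Ea = np.kron(np.eye(dY), np.eye(dA)[a:a + 1])   # I_Y ⊗ <a|
    TrA = TrA + Ea @ J @ Ea.T
prob = cp.Problem(cp.Maximize(cp.real(cp.sum(cp.multiply(C, J)))),
                  [J >> 0, TrA == np.eye(dY)])
prob.solve()
print(prob.value)   # Bob's optimal passing probability; 1/2 for this toy test
```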
Let’s consider parallel repetition: Alice performs two tests independently, but Bob can correlate his strategy in an arbitrary way. Suppose that Bob’s optimal probability to pass a single instantiation of a given test is p, and Alice independently runs the test twice. There are several natural questions: What is the optimal probability for Bob to pass both or at least one test? To pass k out of n independent tests? If Bob uses an independent strategy, you can just calculate the success probability of that. Can he do better by correlating his strategies?
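For the independent-strategy baseline, the answer is just a binomial tail; a tiny helper (our illustration):

```python
from math import comb

def pass_at_least(k, n, p):
    """P(pass >= k of n tests) when Bob plays each test independently
    and passes each one with probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

p = 0.85
print(pass_at_least(2, 2, p))   # both tests:    p^2       ≈ 0.7225
print(pass_at_least(1, 2, p))   # at least one:  1-(1-p)^2 ≈ 0.9775
```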
If Alice's test is classical, then Bob gains no advantage. In the quantum setting, Bob's optimal probability to pass both tests is again $latex p^2$, so he gains no advantage by correlating the tests. Both facts can be proven using SDP duality.
It would be natural to conjecture that it is also optimal for Bob to play independently if he only cares about passing at least one test. But this turns out to be false. They have a simple counterexample to this natural conjecture, and they compute Bob’s optimal strategy for a simple nonlocal game. The strategy is a form of hedging.
This work is in a different setting from other results on non-classical correlations of measurement outcomes. It might give insight into possible attacks in cryptography, where breaking the system only once is all you need: you don't try independent attacks; instead, you entangle a joint attack! It also gives reduced errors in quantum interactive proof protocols. So far, they have proved some partial results along these lines.
Comment by Richard Cleve: There is a classical strategy for hedging in the CHSH game which can perfectly win 1 out of 2 parallel games.
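Cleve's claim is easy to check by brute force over deterministic classical strategies for two parallel CHSH games (a sketch we added; if the claim holds, the search finds at least one strategy pair, and it runs in a few seconds):

```python
import itertools

def wins(a, b, x, y):
    """CHSH winning condition: a XOR b == x AND y."""
    return (a ^ b) == (x & y)

questions = list(itertools.product([0, 1], repeat=2))
# A deterministic strategy maps each question pair to an answer pair;
# encoding answer pairs as 2-bit integers gives 4^4 = 256 functions per player.
strategies = list(itertools.product(range(4), repeat=4))

def hedges(sa, sb):
    """True if (sa, sb) wins at least one of the two games on every input."""
    for x1, x2 in questions:
        a1, a2 = divmod(sa[2 * x1 + x2], 2)
        for y1, y2 in questions:
            b1, b2 = divmod(sb[2 * y1 + y2], 2)
            if not (wins(a1, b1, x1, y1) or wins(a2, b2, x2, y2)):
                return False
    return True

count = sum(hedges(sa, sb) for sa in strategies for sb in strategies)
print(count, "strategy pairs guarantee winning at least 1 of the 2 games")
```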
Jop Briet and Thomas Vidick:
Explicit lower and upper bounds on the entangled value of multiplayer XOR games
Bell’s theorem says that the set of correlations $latex P[ab|xy]$ achievable quantumly is bigger than the set achievable classically. For this talk let’s focus only on $latex P[a\oplus b | xy]$. Let’s look quantitatively at how the quantum advantage depends on parameters like the number of questions, the number of answers, the number of players, and the amount of entanglement (could also consider the type). Boosting this advantage is experimentally relevant to prove wrong all those quantum-mechanics deniers out there. You know the type.
For bipartite entanglement, we understand the situation pretty well now, but what about multipartite? (Another motivation is that quantifying multipartite entanglement is tough because it’s not clear what the goals are: this gives a nice clear way of keeping score.)
For a desired bias ratio R (meaning the entangled value is R times the unentangled value), the bounds are:

| | upper bound ($latex \exists$ game) | lower bound (best possible) |
|---|---|---|
| min questions | $latex O(R^4)$ | $latex \Omega(R^2)$ |
| min local dimension | $latex O(R^2)$ | $latex \Omega(R^2)$ |
Here’s the outline of the proof.
- For any matrix (of appropriate size) we construct a 3-player XOR game.
- Relate R to spectral properties of matrix.
- Show the existence of a good matrix that gives a large separation
And the details:
1. Given R, choose n such that $latex 2^{n/2}=R$.
Give each player n qubits.
The game is to choose n-qubit Paulis P, Q, R with probability proportional to $latex |{\rm tr}\, M(P\otimes Q\otimes R)|^2$ (suitably normalized), and ask the players to output bits whose XOR gives the sign of $latex {\rm tr}\, M(P\otimes Q\otimes R)$. (See the numerical sketch after these steps.)
2. The entangled bias is at least the largest eigenvalue of $latex M$.
The classical bias is the 2-2-2 norm: $latex \max \langle M, A \otimes B \otimes C\rangle$, where the max is taken over A, B, C with Frobenius norm $latex \leq 2^{n/4}$.
3. Choose $latex M$ randomly, of course! Might as well make it a rank-1 projector, so the largest eigenvalue is 1, but other norms, like the 2-2-2 norm, are likely to be small.
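A small numerical sketch of step 1 for n=1 (our illustration; the interesting regime is large n): sample a random rank-1 M and build the game distribution. By Pauli orthogonality (Parseval), the normalization constant is $latex 2^{3n}\|M\|_F^2$.

```python
import numpy as np
import itertools

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = [I2, X, Y, Z]

n = 1                                     # one qubit per player, so R = 2^{n/2}
rng = np.random.default_rng(0)
v = rng.normal(size=2 ** (3 * n)) + 1j * rng.normal(size=2 ** (3 * n))
v /= np.linalg.norm(v)
M = np.outer(v, v.conj())                 # random rank-1 projector

# Question distribution: pick (P, Q, R) with prob |tr M (P⊗Q⊗R)|^2 / norm.
weights = {}
for (i, P), (j, Q), (k, R) in itertools.product(enumerate(paulis), repeat=3):
    weights[i, j, k] = abs(np.trace(M @ np.kron(np.kron(P, Q), R))) ** 2
norm = sum(weights.values())
print(norm)                               # = 2^{3n} ||M||_F^2 = 8 here
print(np.linalg.eigvalsh(M).max())        # entangled bias >= 1 (step 2)
```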
Some open questions: They leave open a gap of size $latex R^2$ in the number of questions needed for an R-sized bias ratio. It would be nice to get a tight analysis. Another open question is to derandomize their construction.
Gus Gutoski and Xiaodi Wu:
Parallel approximation of min-max problems
with applications to classical and quantum zero-sum games
Everyone knows that QIP=PSPACE. But did you also know that SQG=QRG(2)=PSPACE? (Short quantum games, and two-turn quantum refereed games, of course.) All of these follow from using the multiplicative weights update method to solve large SDPs in parallel. (Here is a review of the multiplicative weights update method that doesn’t discuss quantum applications, and here is a review that does).
This work concerns 2-turn quantum refereed games. Some motivation comes from complexity theory. We can consider interactive proof systems with two competing provers, one trying to prove Yes and the other trying to prove No.
The key result is that SQG is contained in PSPACE, thus unifying and simplifying previous results about various games and interactive proofs being equivalent to PSPACE.
Additionally, they showed that public-coin RG is not equal to RG unless PSPACE = EXP. Thus, the case of two competing provers is different from the case of a single untrusted prover (the usual IP situation), in which the messages to the prover can be replaced by public coins.
The key technical result is a way to parallelize an SDP related to whether certain quantum states exist. SDPs in general cannot be parallelized unless NC=P, but this result is possible because it deals with normalized quantum states going through trace-preserving quantum channels, and so the multiplicative weights update method (MWUM) can give a good approximation. Indeed MWUM has previously been used to quickly find solutions to classical zero-sum games (and convex programs in general), so it is natural to examine it for quantum zero-sum games. And even though there may be many rounds of interactions, these can be modeled by a series of linear and semidefinite constraints, applied to the density matrices appearing at various intermediate steps. This “transcript representation” was originally invented by Kitaev (whose ghostly presence looms large over this entire conference) in his work on coin flipping.
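For flavor, here is the classical prototype (a standard MWU sketch of our own, not the paper's parallel SDP machinery): solving a zero-sum matrix game by multiplicative weights, with the column player best-responding each round.

```python
import numpy as np

def mwu_zero_sum(A, eps=0.01, T=2000):
    """Approximate the value of the zero-sum game min_x max_y x^T A y
    (A = row player's loss matrix) via multiplicative weights.
    Each round the row player plays its normalized weights, the column
    player best-responds, and rows incurring high loss are penalized.
    The time-averaged strategies converge to an approximate equilibrium."""
    n, m = A.shape
    w = np.ones(n)
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        x = w / w.sum()
        y = np.zeros(m)
        y[np.argmax(x @ A)] = 1.0        # column player's best response
        w *= (1.0 - eps) ** (A @ y)      # multiplicative penalty on each row
        x_avg += x
        y_avg += y
    return x_avg / T, y_avg / T

# Sanity check on matching pennies (value 1/2 for losses in {0,1}).
A = np.eye(2)
x, y = mwu_zero_sum(A)
print(x @ A @ y)   # ≈ 0.5, the value of the game
```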
They also discussed penalization and rounding theorems and space efficiency.
The best part was a nice artistic depiction of PSPACE as a black hole gobbling up the other complexity classes, as shown at the top of the page!
Just a comment, in case my exposition was confusing/rushed 🙂
The results about error reduction for quantum interactions in our work are, in a sense, the opposite of the hedging counterexample. Indeed, if no such counterexample existed, and it was always optimal for Bob to play independently to win k interactions out of n, then we would be able to achieve error reduction in all cases. And it is by finding limitations on the strength of the hedging counterexamples that we obtain the partial results for error reduction.