{"id":5829,"date":"2011-12-14T13:42:59","date_gmt":"2011-12-14T20:42:59","guid":{"rendered":"http:\/\/dabacon.org\/pontiff\/?p=5829"},"modified":"2011-12-14T13:42:59","modified_gmt":"2011-12-14T20:42:59","slug":"qip-2012-day-3","status":"publish","type":"post","link":"https:\/\/dabacon.org\/pontiff\/2011\/12\/14\/qip-2012-day-3\/","title":{"rendered":"QIP 2012 Day 3"},"content":{"rendered":"\n<p style=\"text-align: center\"><em>PSPACE is a black hole gobbling up the other complexity classes.<br \/>\nAnd you thought the LHC was dangerous!<\/em><\/p>\n<h3 style=\"text-align: center\">Troy Lee and J\u00e9r\u00e9mie Roland:<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#LR\"> A strong direct product theorem for quantum query complexity<\/a><\/h3>\n<p>Suppose you want to solve k independent instances of a problem using k times as many resources. For example, say you want to juggle 4 balls with two hands vs 2 balls with one hand.<br \/>\nA Strong Direct Product Theorem (SDPT) says (roughly) the following:<br \/>\nIf we require r resources to solve one instance of a problem with success probability p, then using fewer than kr resources to solve k instances cannot succeed with probability greater than p<sup>k<\/sup>.<br \/>\nIntuitively, it ought to hold in many situations, but it has been proven to hold only in a few. Some applications of SDPT:<\/p>\n<ul>\n<li>Making a hard function really hard, e.g. Yao\u2019s XOR lemma<\/li>\n<li>Amplifying the soundness of a protocol, e.g. Raz<\/li>\n<\/ul>\n<p>Here they study Quantum Query Complexity, which uses a black-box oracle that can be queried in superposition, as a proxy for quantum time complexity. It behaves so well that it scarcely deserves to be called a computational model.<br \/>\nThe big picture is that they use adversary methods and the state generation formalism due to Ambainis. The method defines a type of potential function and then bounds the rate of change from one query to the next. 
In the additive setting, they bound the difference, and in the multiplicative setting, they bound the ratio. The state generation method has the advantage that it allows the separation of lower bounds into two parts: the bound on the zero-error state generation and the minimization of this bound over the valid final Gram matrices of a successful protocol.<br \/>\nIt is known that the multiplicative adversary method characterizes bounded-error quantum query complexity, since the multiplicative adversary bound is at least as large as the additive one, which is itself tight for bounded error. One of the achievements of this paper is that they show that the multiplicative adversary method still works when the convergence ratio is strictly bounded away from one. This immediately implies an SDPT by the results of Spalek \u201908, but they reprove the SDPT from scratch so that they can get a better optimization of the parameters.<br \/>\nConclusion and open problems: For Boolean functions, they show an XOR lemma. Does an SDPT hold for all state generation problems? Is the multiplicative adversary method also tight in the case of large error?<\/p>\n<h3 style=\"text-align: center\">Andr\u00e9 Chailloux and Or Sattath:<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#CS\"> The Complexity of the Separable Hamiltonian Problem<\/a><\/h3>\n<p>First recall that QMA is the quantum analog of NP: it\u2019s the set of problems which can be verified efficiently on a quantum computer when given a quantum proof (i.e. a quantum state for a witness). For QMA with 2 provers, denoted QMA(2), there are two unentangled states which are given as proofs. It\u2019s similar to the interrogation of suspects: Arthur can play the two Merlins against each other. (Each Merlin still sends poly(n) qubits, so QMA(2) is at least as strong as QMA, but is probably much stronger.) Pure n-representability is known to be in QMA(2) but not known to be in QMA. It\u2019s known that QMA(k) = QMA(2) for all k&gt;2. 
We know only that QMA(2) $latex \\subseteq$ NEXP; this seems like a pretty weak upper bound!<br \/>\nThe main result of this paper is to find a complete problem for QMA(2). The separable <em>local<\/em> Hamiltonian problem is actually in QMA, but the separable <em>sparse<\/em> Hamiltonian problem is QMA(2)-complete.<br \/>\nThe problem: given a Hamiltonian $latex H = \\sum_i h_i$ where each term is either local or sparse, decide whether there exists a separable state with energy at most a, or whether every separable state has energy at least b. (Where b-a &gt; 1\/poly(n) is the promise gap.)<br \/>\nSeparable Local Hamiltonian: Given a local Hamiltonian H, find a separable (say across an n by n qubit cut) $latex \\rho$ that minimizes $latex \\mathrm{tr} H \\rho$.<br \/>\nThis sure doesn\u2019t look like it\u2019s in QMA!<br \/>\nIf the prover sends $latex \\rho$, there\u2019s no way to verify that it is a product state, because you can\u2019t test entanglement with a single copy.<br \/>\nInstead, the prover sends $latex \\rho$ AND classical descriptions of all k-qubit marginals (k comes from H being a k-local Hamiltonian). The consistency of these marginals is in QMA; the witness is simply $latex \\rho$. (In fact, <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0604166\">this problem is QMA-complete<\/a>.)<br \/>\nSeparable Sparse Hamiltonian: A totally different story! Now it\u2019s QMA(2)-complete. (The first known such problem.) Immediately we see that the qubit structure doesn\u2019t exist, so we can\u2019t rely on the strategy of looking at marginals. Clearly the problem is in QMA(2). To show completeness, we adapt Kitaev\u2019s Hamiltonian for encoding a circuit (his so-called clock construction). One challenge is that if the QMA(2) verifier makes an entangled measurement, then the history state will be entangled, which is <em>not<\/em> of the form we want. 
We want to simulate a computation in which only the <em>input<\/em> is required to be unentangled, and we don\u2019t care about entanglement at later steps; however, we are given the ability to minimize the energy of the sparse Hamiltonian over separable states, period.<br \/>\nThe trick is that <a href=\"http:\/\/arxiv.org\/abs\/1001.0017\">1001.0017<\/a> shows that QMA(2) = QMA(2)-SEP, which is defined to be the class where the verifier is restricted to making separable (i.e. non-entangling) measurements, at least for the yes outcome. This measurement requires a controlled-SWAP on the two proofs, which, interestingly, is sparse but not local. You might think you can build this out of a bunch of small controlled-SWAPs, but this would produce entanglement at intermediate steps. SWAP has a complicated relationship with entanglement. It doesn\u2019t entangle product states, but only if it\u2019s applied to the entire state; applying it to subsystems <em>can<\/em> create entanglement.<br \/>\nMany of the same open questions remain, about the status of QMA vs QMA(2) (although there is evidence from the case of log-size proofs to suggest that they are different), and about whether pure n-representability is also QMA(2)-complete.<\/p>\n<h3 style=\"text-align: center\">Yaoyun Shi and Xiaodi Wu:<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#SW\"> Epsilon-net method for optimizations over separable states<\/a><\/h3>\n<p>The QMA(2) session continues! Once upon a time, I (Aram) thought QMA(2) was the kind of thing that complexity theorists make up to have more things to talk about, but I\u2019ve since made a total U-turn. The problem, especially its log-size-witness variant, is equivalent to a ridiculously large number of problems, like separability testing, mean-field Hamiltonians, computing minimum output R\u00e9nyi entropies of channels, finding the largest singular value of tensors, estimating the $latex 2\\rightarrow 4$ norm of matrices, and many others. 
So it\u2019s exciting that the only dimension-independent hardness result we know requires quantum techniques (<a href=\"http:\/\/arxiv.org\/abs\/1001.0017\">1001.0017<\/a>). And also the only (classical) approximation algorithm! (<a href=\"http:\/\/arxiv.org\/abs\/1011.2751\">1011.2751<\/a>)<br \/>\nThis talk changes this, with a pair of elementary approximation algorithms, one of which matches the performance of a weaker variant of <a href=\"http:\/\/arxiv.org\/abs\/1010.1750\">1010.1750<\/a> (details below).<br \/>\nTo understand the basic problem, we don\u2019t need to think about provers or circuits. The problem is as follows: Given a Hermitian matrix H on $latex d^2 \\times d^2$ dimensions, estimate $latex {\\rm OptSep}(H) := \\max \\{ {\\rm tr} H \\rho : \\rho \\in {\\rm Sep}(d,d)\\}$, where $latex {\\rm Sep}(d,d)$ denotes separable states with d dimensions for each party. This is a convex optimization problem, but it\u2019s hard to test membership in $latex {\\rm Sep}(d,d)$; indeed, this has equivalent difficulty. Alternatively, we know that the maximum is achieved for a $latex \\rho$ which is a pure product state, where membership is easy to test, but this set is non-convex.<br \/>\nIt is known that performing this optimization up to 1\/poly(d) accuracy is NP-hard. (This result is originally due to <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0303055\">Gurvits<\/a>, although that\u2019s not so widely known, since most people only noticed the part of his paper which talks about the hardness of testing membership in $latex {\\rm Sep}(d,d)$.) Constant accuracy is only known to be as hard as solving a 3-SAT instance of length $latex \\log^2(n)$.<br \/>\nThis talk gives two algorithms for approximating OptSep.<br \/>\nOne works well when H is nearly product; in particular, when H can be decomposed as $latex H=\\sum_{i=1}^m A_i \\otimes B_i$. 
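As a concrete aside (my own toy example, not from the talk), the structure of OptSep is easy to play with in the smallest case: since the maximum of tr(H rho) over separable states is attained at a pure product state, for two qubits one can simply grid over product states. Taking H to be the projector onto the Bell state (|00>+|11>)/sqrt(2), the separable optimum is 1/2, while the maximum over all states (the top eigenvalue of H) is 1.

```python
# Toy sketch (not from the talk): brute-force a net over pure product states
# to estimate OptSep(H) for H = |Phi+><Phi+|, the two-qubit Bell projector.
# The separable optimum is 1/2, well below the top eigenvalue 1.
import cmath
import itertools
import math

def qubit(theta, phi):
    """A pure qubit state (cos(theta/2), e^{i phi} sin(theta/2))."""
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

def bell_overlap(x, y):
    """<Phi+ | x (x) y> for |Phi+> = (|00> + |11>)/sqrt(2)."""
    return (x[0] * y[0] + x[1] * y[1]) / math.sqrt(2)

steps = 12
grid = [qubit(math.pi * i / steps, 2 * math.pi * j / steps)
        for i in range(steps + 1) for j in range(steps)]
# tr(H |x,y><x,y|) = |<Phi+|x (x) y>|^2, maximized over the product-state net.
best = max(abs(bell_overlap(x, y)) ** 2 for x, y in itertools.product(grid, grid))
print(round(best, 3))  # 0.5: the separable value
```

Brute-force nets like this scale exponentially with dimension, consistent with the hardness results above; the talk's epsilon-net algorithms are much more careful about what gets enumerated.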
The algorithm is to compute the joint numerical range of the $latex A_1,\\ldots,A_m$ and $latex B_1,\\ldots,B_m$, meaning the sets of possible $latex ({\\rm tr}\\rho A_1, \\ldots, {\\rm tr}\\rho A_m)$ and of possible $latex ({\\rm tr}\\rho B_1, \\ldots, {\\rm tr}\\rho B_m)$. If m is not too big, then this is reasonable. Moreover, we can enumerate over the corresponding epsilon-nets with only a small amount of space. This implies that when QMA(2) is restricted to nearly product measurements (i.e. $latex m \\leq \\mathrm{poly}(n)$), then the resulting class is contained in PSPACE.<br \/>\nThe second algorithm does eigenspace enumeration directly on H, and achieves additive error $latex \\delta$ in time $latex {\\rm poly}(d) \\exp(|H|_2^2 \\delta^{-2} \\log(|H|_2\/\\delta))$. This matches one of the corollaries of <a href=\"http:\/\/arxiv.org\/abs\/1011.2751\">1011.2751<\/a>, but not their main result.<\/p>\n<h3 style=\"text-align: center\">Abel Molina and John Watrous:<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#MW\"> Hedging bets with correlated quantum strategies<\/a><\/h3>\n<p>Alice and Bob are at it again, playing a nonlocal game. Here is the framework: Alice prepares a question and sends it to Bob; Bob responds by sending an answer to Alice. Based on Bob\u2019s answer, as well as whatever memory she has of her own question, Alice decides whether Bob has passed or failed a test.<br \/>\nAlice\u2019s actions are specified by a density operator and a two-outcome POVM. We assume that Bob knows a complete description of Alice. Then the games that they consider here can be formulated as a semidefinite program to determine Bob\u2019s optimal success probability for passing the test.<br \/>\nLet\u2019s consider parallel repetition: Alice performs two tests independently, but Bob can correlate his strategy in an arbitrary way. Suppose that Bob\u2019s optimal probability to pass a single instantiation of a given test is p, and Alice independently runs the test twice. 
There are several natural questions: What is the optimal probability for Bob to pass both or at least one test? To pass k out of n independent tests? If Bob uses an independent strategy, you can just calculate the success probabilities directly. Can he do better by correlating his strategies?<br \/>\nIf Alice\u2019s test is classical, then Bob gains no advantage. In the quantum setting, Bob\u2019s optimal probability to pass both tests is again p<sup>2<\/sup>, so he gains no advantage by correlating the tests. Both facts can be proven using SDP duality.<br \/>\nIt would be natural to conjecture that it is also optimal for Bob to play independently if he only cares about passing <em>at least one<\/em> test. But this turns out to be false. They have a simple counterexample to this natural conjecture, and they compute Bob\u2019s optimal strategy for a simple nonlocal game. The strategy is a form of <em>hedging<\/em>.<br \/>\nThis work is in a different setting from other results on non-classical correlations of measurement outcomes. It might give insight into possible attacks in cryptography, where breaking the system only once is all you need: you don\u2019t try independent attacks; instead, you entangle a joint attack! It also gives reduced errors in quantum interactive proof protocols. So far, they have proved some partial results along these lines.<br \/>\nComment by Richard Cleve: There is a classical strategy for hedging in the CHSH game which can perfectly win 1 out of 2 parallel games.<\/p>\n<h3 style=\"text-align: center\">Jop Briet and Thomas Vidick:<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#BV\"> Explicit lower and upper bounds on the entangled value of multiplayer XOR games<\/a><\/h3>\n<p>Bell\u2019s theorem says that the set of correlations $latex P[ab|xy]$ achievable quantumly is bigger than the set achievable classically. For this talk let\u2019s focus only on $latex P[a\\oplus b | xy]$. 
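As a warm-up (my aside, not part of the talk): the simplest XOR game is CHSH, where the players receive uniformly random bits x and y and win iff their output bits satisfy a XOR b = x AND y. The classical value is attained by deterministic strategies, so a brute-force check over all of them suffices.

```python
# Brute-force the classical value of the CHSH XOR game: each deterministic
# strategy is a function {0,1} -> {0,1} per player; the best wins 3 of the
# 4 equally likely question pairs.
import itertools

def chsh_classical_value():
    best = 0.0
    for fa in itertools.product((0, 1), repeat=2):      # Alice's answers fa[x]
        for fb in itertools.product((0, 1), repeat=2):  # Bob's answers fb[y]
            wins = sum((fa[x] ^ fb[y]) == (x & y)
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

print(chsh_classical_value())  # 0.75
```

Entangled players can instead win with probability cos^2(pi/8), about 0.854; ratios of exactly this kind, for three players, are what the talk's bounds quantify.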
Let\u2019s look quantitatively at how the quantum advantage depends on parameters like the number of questions, the number of answers, the number of players, and the amount of entanglement (one could also consider the type). Boosting this advantage is experimentally relevant to prove wrong all those quantum-mechanics deniers out there. You know the type.<br \/>\nFor bipartite entanglement, we understand the situation pretty well now, but what about multipartite? (Another motivation is that quantifying multipartite entanglement is tough because it\u2019s not clear what the goals are: this gives a nice clear way of keeping score.)<br \/>\nFor a desired bias ratio R (meaning the entangled value is R times the unentangled value), the known bounds are:<\/p>\n<table>\n<tr>\n<th><\/th>\n<th>upper bound ($latex \\exists$ game)<\/th>\n<th>lower bound (best possible)<\/th>\n<\/tr>\n<tr>\n<td>min questions<\/td>\n<td>$latex O(R^4)$<\/td>\n<td>$latex \\Omega(R^2)$<\/td>\n<\/tr>\n<tr>\n<td>min local dimensions<\/td>\n<td>$latex O(R^2)$<\/td>\n<td>$latex \\Omega(R^2)$<\/td>\n<\/tr>\n<\/table>\n<p>Here\u2019s the outline of the proof.<\/p>\n<ol>\n<li>For any matrix (of appropriate size) we construct a 3-player XOR game.<\/li>\n<li>Relate R to spectral properties of the matrix.<\/li>\n<li>Show the existence of a good matrix that gives a large separation.<\/li>\n<\/ol>\n<p>And the details:<br \/>\n1. Given R, choose n such that $latex 2^{n\/2}=R$.<br \/>\nGive each player n qubits.<br \/>\nThe game is to choose Paulis P,Q,R with probability $latex |{\\rm tr}M(P\\otimes Q\\otimes R)|^2$ (suitably normalized), and ask the players to output bits whose XOR gives the sign of $latex {\\rm tr}M(P\\otimes Q\\otimes R)$.<br \/>\n2. The entangled bias is at least the largest eigenvalue of $latex M$.<br \/>\nThe classical bias is the 2-2-2 norm, $latex \\max \\langle M, A \\otimes B \\otimes C\\rangle$, where the max is taken over A,B,C with Frobenius norm $latex \\leq 2^{n\/4}$.<br \/>\n3. Choose $latex M$ randomly, of course! 
Might as well make it a rank-1 projector, so the largest eigenvalue is 1, but other norms, like the 2-2-2 norm, are likely to be small.<br \/>\nSome open questions: They leave open a gap of size R<sup>2<\/sup> in the number of questions needed for an R-sized bias ratio. It would be nice to get a tight analysis. Another open question is to derandomize their construction.<\/p>\n<h3 style=\"text-align: center\">Gus Gutoski and Xiaodi Wu:<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#GW\"> Parallel approximation of min-max problems<br \/>\nwith applications to classical and quantum zero-sum games<\/a><\/h3>\n<p>Everyone knows that QIP=PSPACE. But did you also know that SQG=QRG(2)=PSPACE? (Short quantum games, and two-turn quantum refereed games, of course.) All of these follow from using the multiplicative weights update method to solve large SDPs in parallel. (<a href=\"http:\/\/www.cs.princeton.edu\/~arora\/pubs\/MWsurvey.pdf\">Here<\/a> is a review of the multiplicative weights update method that doesn\u2019t discuss quantum applications, and here is a review that does).<br \/>\nThis work concerns 2-turn quantum refereed games. Some motivation comes from complexity theory: we can consider interactive proof systems with two competing provers, one trying to prove Yes and the other trying to prove No.<br \/>\nThe key result is that SQG is contained in PSPACE, thus unifying and simplifying previous results about various games and interactive proofs being equivalent to PSPACE.<br \/>\nAdditionally, they showed that public-coin RG is not equal to RG unless PSPACE = EXP. Thus, the case of two competing provers is different from the case of a single untrusted prover (the usual IP situation), in which the messages to the prover <em>can<\/em> be replaced by public coins.<br \/>\nThe key technical result is a way to parallelize an SDP related to whether certain quantum states exist. 
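Since the multiplicative weights update method does the heavy lifting here, a minimal sketch of it in its simplest classical habitat may help (my illustration; the paper's parallel SDP solver is far more involved). For a zero-sum matrix game, the row player maintains weights over rows, plays the normalized weights, and multiplicatively boosts rows that did well against the column player's best response.

```python
# Minimal multiplicative-weights sketch for a zero-sum matrix game
# (an illustration of the classical method, not the paper's algorithm).
import math

def mwu_game_value(payoff, rounds=4000, eta=0.05):
    """Average best-response payoff; approaches the game value (row maximizes)."""
    n_rows, n_cols = len(payoff), len(payoff[0])
    weights = [1.0] * n_rows
    total_value = 0.0
    for _ in range(rounds):
        s = sum(weights)
        row_strategy = [w / s for w in weights]
        # Column player best-responds, minimizing the row player's payoff.
        col_payoffs = [sum(row_strategy[i] * payoff[i][j] for i in range(n_rows))
                       for j in range(n_cols)]
        j = min(range(n_cols), key=col_payoffs.__getitem__)
        total_value += col_payoffs[j]
        # Multiplicative update: reward rows that do well against column j.
        for i in range(n_rows):
            weights[i] *= math.exp(eta * payoff[i][j])
    return total_value / rounds

# Matching pennies (game value 0) as a sanity check.
print(mwu_game_value([[1.0, -1.0], [-1.0, 1.0]]))
```

Roughly speaking, the quantum results apply the same exponential-update idea to density matrices, with the linear and semidefinite constraints of the transcript standing in for the simple best-response step.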
SDPs in general cannot be parallelized unless NC=P, but this result is possible because it deals with normalized quantum states going through trace-preserving quantum channels, and so the multiplicative weights update method (MWUM) can give a good approximation. Indeed MWUM has previously been used to quickly find solutions to <em>classical<\/em> zero-sum games (and convex programs in general), so it is natural to examine it for quantum zero-sum games. And even though there may be many rounds of interactions, these can be modeled by a series of linear and semidefinite constraints, applied to the density matrices appearing at various intermediate steps. This &#8220;transcript representation&#8221; was originally invented by Kitaev (whose ghostly presence looms large over this entire conference) in his work on coin flipping.<br \/>\nThey also discussed penalization and rounding theorems and space efficiency.<br \/>\nThe best part was a nice artistic depiction of PSPACE as a black hole gobbling up the other complexity classes, as shown at the top of the page!<br \/>\n&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>PSPACE is a black hole gobbling up the other complexity classes. And you thought the LHC was dangerous! Troy Lee and J\u00e9r\u00e9mie Roland: A strong direct product theorem for quantum query complexity Suppose you want to solve k independent instances of a problem with k-fold as many resources. 
For example, say you want to juggle &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/dabacon.org\/pontiff\/2011\/12\/14\/qip-2012-day-3\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;QIP 2012 Day 3&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[22,40,63],"tags":[],"class_list":["post-5829","post","type-post","status-publish","format-standard","hentry","category-conferences","category-liveblogging","category-quantum"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/5829","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/comments?post=5829"}],"version-history":[{"count":0,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/5829\/revisions"}],"wp:attachment":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/media?parent=5829"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/
\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/categories?post=5829"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/tags?post=5829"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}