QIP 2015 zombie-blogging, Day 5

Today’s highlight: an algorithm running in time O(n^{59}), also known as “polynomial time” or “efficient”.

Joseph Fitzsimons and Thomas Vidick.
A multiprover interactive proof system for the local Hamiltonian problem (Plenary Talk)
abstract arXiv:1409.0260

Thomas begins by reminding everyone about what NP is and mentions that Super Mario is NP-complete. However, I solved it as a child and saved the princess from Bowser. I’m not sure what the implications of this are for P vs. NP. Knowing that Super Mario is in NP saves me from having to learn it. I just reduce it to 3-SAT.
All problems in NP have a local verification procedure because of the NP-completeness of 3-SAT. But we still need to know the whole proof, right?
There is a nice way to make proof verification completely local, and that is to introduce more than one prover. We introduce a familiar cast of characters: Merlin and Arthur. Arthur is the referee and is allowed to ask two non-communicating Merlins to prove something to him. The value is defined to be \omega(G) = \sup_{\textrm{Merlins}} \textrm{Pr}[\text{Arthur accepts}]. We need to ensure that this scheme is both sound and complete. A stronger version known as the PCP theorem has the following interesting property. Arthur can do some pre-processing and then just check a constant number of clauses to get a high probability of soundness.
The 3-local Hamiltonian problem is a well-known complete problem for QMA. Here one is given a local Hamiltonian on n qubits with a promise that the ground state energy is either less than a or greater than b, with b - a \ge 1/\mathrm{poly}(n) (the “promise gap”), and one must decide which is the case.
In the quantum setting, we allow the Merlins to share initial entanglement, but they can’t communicate after that. Now the value is denoted by \omega^*(G) = \sup_{\textrm{Merlins}} \textrm{Pr}[\text{Arthur accepts}]. The star denotes this additional quantum entanglement shared by the Merlins.
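To make the gap between \omega(G) and \omega^*(G) concrete, here is a toy calculation (my own illustration, not from the talk): the classical value of the CHSH game, computed by brute force over deterministic strategies. Entangled Merlins can win CHSH with probability \cos^2(\pi/8) \approx 0.854, strictly above the classical optimum of 0.75.

```python
import itertools
import numpy as np

# Toy example: the classical value omega(G) of the CHSH game is a max
# over deterministic strategies (shared randomness cannot help).
# Questions x, y are uniform in {0,1}; the Merlins answer a, b in {0,1};
# they win iff a XOR b == x AND y.
questions = list(itertools.product([0, 1], repeat=2))

best = 0.0
for f in itertools.product([0, 1], repeat=2):      # Merlin-1: a = f[x]
    for g in itertools.product([0, 1], repeat=2):  # Merlin-2: b = g[y]
        p = np.mean([(f[x] ^ g[y]) == (x & y) for x, y in questions])
        best = max(best, p)

print("classical value of CHSH:", best)  # 0.75, vs cos^2(pi/8) ~ 0.854 entangled
```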
Can Arthur use the entangled Merlins to his advantage, to recognize languages beyond NP? The Peres-Mermin “magic square” game makes this unclear since at least in some cases the Merlins can “cheat” and use entanglement to increase the acceptance probability. But it was shown [KKMTV 08, IKM 09] that Arthur can still use entangled Merlins to at least recognize languages in NP. An aside: this, like most previous work, viewed the entanglement between the Merlins mostly as a drawback to be mitigated using techniques like monogamy relations that force the provers to use strategies that resemble unentangled strategies.
To illustrate the difficulties, suppose we have a 1D Hamiltonian with nearest-neighbor interactions. Suppose that these are anti-ferromagnetic, so that the unique ground state of each two-qubit term is the singlet, which we say has zero energy. This is of course highly frustrated, and the ground-state energy will be proportional to the number of qubits. But a naive two-prover proof system would allow us to be convinced that the ground-state energy is zero. Split the interactions into even and odd layers, each consisting of mutually commuting terms, and have Merlin-1 take the odd-site qubits and Merlin-2 take the even-site qubits. We choose a random interaction, say on sites j and j+1, and ask Merlin-1 for one of them and Merlin-2 for the other. But this doesn’t work: the Merlins need only share a single singlet state, which they return regardless of which question we ask.
The main result is a five-player game for the 3-local Hamiltonian problem. The messages from the Merlins to Arthur are quantum, but very short. The value of the game with n classical questions, 3 answer qubits, and with 5 players is QMA-hard to compute to within a 1/\mathrm{poly}(n) factor. Consider the ground state of the Hamiltonian encoded in the 5-qubit code. We will require the five Merlins to each have one share of these encoded qubits, so each Merlin has n qubits.
The protocol is as follows. Pick a random clause H_l on qubits i,j,k and either:

  1. energy check
    1. ask each Merlin for his share of i,j,k
    2. decode
    3. measure H_l
  2. code check
    1. ask one Merlin for his share of i,j,k
    2. ask other Merlins for their share of i
    3. verify that qubits lie in code subspace.
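Schematically, the classical part of the referee's job, choosing which questions to send, looks like the sketch below. This is only my illustration of the two checks described above; the quantum answers, decoding, and measurement are not modeled, and the clause list is a hypothetical placeholder.

```python
import random

def referee_questions(clauses, num_merlins=5):
    """Pick a random 3-local clause and a check type; return, for each
    Merlin, which qubit shares he is asked to send. (Sketch only; the
    answers are short quantum messages, not simulated here.)"""
    i, j, k = random.choice(clauses)  # clause H_l acts on qubits i, j, k
    if random.random() < 0.5:
        # Energy check: every Merlin sends his shares of i, j, k;
        # the referee decodes and measures H_l.
        return [(i, j, k)] * num_merlins
    # Code check: one Merlin sends his shares of i, j, k; the others
    # send only their share of i, and the referee verifies that the
    # five shares of qubit i lie in the 5-qubit code space.
    special = random.randrange(num_merlins)
    return [(i, j, k) if m == special else (i,) for m in range(num_merlins)]

print(referee_questions([(0, 1, 2), (2, 3, 4), (4, 5, 6)]))
```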

The intuition is that the Merlins are forced to be in a code space, with the states of four Merlins constraining the fifth. How is this proven?
The most general Merlin strategy is to share a state |\psi\rangle and to respond to a request for qubit i by applying a unitary U_i, or to a request for i,j,k with the unitary V_{i,j,k}. We would like to argue that any such strategy can be turned into a method for extracting all n qubits from the state |\psi\rangle.
This will be done using a method similar to a mosquito drinking blood: as it extracts blood it replaces the volume of liquid with its saliva. Here we extract a qubit i using U_i (actually U_i^{(1)}, \ldots, U_i^{(5)}), and then replace those qubits with halves of EPR pairs and then each prover applies U_i^\dagger. Actually, the other halves of the EPR pairs are never used so even a maximally mixed state would work. The point is just to put something there, effectively performing a logical SWAP.
This work also leads to a natural reformulation of the quantum PCP conjecture: constant-factor approximations to the entangled value \omega^*(G) of a game with entangled provers are QMA-hard. The present result is a first step in this direction, handling the case of a 1/\mathrm{poly}(n) factor.
Another consequence is for MIP and related complexity classes. MIP(c,s) refers to the class of problems with a multi-prover interactive proof with completeness c and soundness s. In this language the PCP theorem implies that NEXP=MIP(1,1/2).
In the quantum case, Thomas proved that NEXP \subseteq (Q)MIP^*(1,1/2) in an earlier breakthrough. This work now shows that \text{QMA}_{\text{EXP}} \subseteq \text{MIP}^*(1-2^{-p}, 1-2^{-2p}) for some polynomial p, proving for the first time that entanglement increases the power of quantum proof systems. Here “proving” is in the usual complexity-theory sense, where we have to make some plausible assumption: in this case, that \text{QMA}_{\text{EXP}} \not\subseteq \text{NEXP}.
During the questions, Ronald de Wolf and Daniel Gottesman pointed out that you might be able to reduce it from 5 Merlins to 4 by using error-detecting codes, or even 3 by using qutrit quantum error-detecting codes. Or what about 2 using approximate QECC? (This last probably won’t work.)


Sergey Bravyi and Matthew Hastings. On complexity of the quantum Ising model
abstract arXiv:1410.0703

(This talk was presented by the heroic David Gosset because Sergey didn’t get a visa in time.)
The transverse Ising model (TIM) is important for several reasons. For one thing, this model is ubiquitous in the theory of phase transitions. It’s a good model of certain non-universal quantum devices like the D-wave machine. And the recent breakthrough of Cubitt and Montanaro shows that the TIM arises naturally as a candidate class of intermediate complexity.
We would love to understand the quantum annealing (QA) algorithm of Farhi et al., and unfortunately we won’t be able to do that here. But we can use it to set up a simulator-target model that lets us do reductions between various Hamiltonian complexity problems. For example, if we have a simulator QA machine restricted to TIM Hamiltonians, then it is unlikely that we can simulate a target QA machine with access to general 2-local Hamiltonians (which are BQP-complete): the simulator TIM machine is contained in BQP \cap PostBPP, which is unlikely to equal BQP.
Recall the class of “stoquastic” local Hamiltonians. These are local Hamiltonians with “no sign problem”, meaning that all off-diagonal matrix elements in the computational basis are real and non-positive. There is a natural complexity class, StoqMA, that captures the complexity of these Hamiltonians.
StoqMA is like QMA, but Arthur can only apply reversible classical gates (CNOT, Toffoli) and measure some fixed qubit in the X basis. He accepts iff the measurement outcome is +. He can use |0\rangle and |+\rangle ancillas.
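To see the “no sign problem” condition concretely for the TIM, here is a quick numerical check (a minimal sketch of the standard definition, not code from the paper). The ZZ couplings are diagonal in the computational basis, so only the transverse-field terms contribute off-diagonal entries, and those are non-positive whenever the field enters with a minus sign:

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def embed(op, site, n):
    """Place a single-qubit operator on `site` of an n-qubit chain."""
    return reduce(np.kron, [op if i == site else I for i in range(n)])

# H = sum_j J_j Z_j Z_{j+1} - sum_j h_j X_j, with h_j >= 0.
n = 3
rng = np.random.default_rng(0)
J = rng.normal(size=n - 1)        # couplings of either sign are fine
h = rng.uniform(0, 1, size=n)     # non-negative transverse fields
H = sum(J[j] * embed(Z, j, n) @ embed(Z, j + 1, n) for j in range(n - 1))
H = H - sum(h[j] * embed(X, j, n) for j in range(n))

# "No sign problem": all off-diagonal entries are real and non-positive.
off_diag = H - np.diag(np.diag(H))
print("stoquastic:", np.all(off_diag <= 1e-12))  # True
```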
StoqMA’s relation to other complexity classes:

  • P \subseteq NP \subseteq MA \subseteq StoqMA \subseteq QMA \subseteq A_0PP
  • StoqMA \subseteq SBP \subseteq PostBPP
  • SBP \subseteq AM \subseteq \Pi_2

(A_0PP and SBP are approximate counting classes.)
Main result: The local Hamiltonian problem for TIM on degree-3 graphs is StoqMA-complete. This sharpens the Cubitt and Montanaro classification by linking the TIM directly to StoqMA.
In the ferromagnetic TIM, the coupling terms are all positive and the Z-field is uniform. Another result is a polynomial-time approximation algorithm for the partition function of the ferromagnetic TIM. This generalizes, and in fact makes use of, a result of Jerrum & Sinclair from 1993. The run time is polynomial: O(n^{59}). Awesome. Taking the log of the partition function, one gets the free energy within an additive constant, and at low temperature this approximates the ground-state energy.
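For reference, the standard relations in play here (textbook facts, not specific to the talk) are Z(\beta) = \mathrm{Tr}\, e^{-\beta H} and F(\beta) = -\beta^{-1}\ln Z(\beta); as \beta \to \infty, F(\beta) converges to the ground-state energy, which is why an additive approximation to \ln Z at low temperature yields an estimate of the ground-state energy.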


Rafael Chaves, Christian Majenz, Lukas Luft, Thiago O. Maciel, Dominik Janzing, Bernhard Schölkopf and David Gross.
Information-Theoretic Implications of Classical and Quantum Causal Structures
abstract arXiv:1407.3800

Given some empirically observable variables, which correlations between these are compatible with a presumed causal structure? This is a fundamental problem in science, as illustrated by the following situation. You would like to claim that smoking causes cancer. But all you have is correlation data… and correlation doesn’t imply causation. So when can we make that inference?
One of the main ideas of this work is to use entropies rather than probabilities in order to avoid headaches associated with nonconvex structure that appears in previous approaches to answering these types of questions.
Classically, Directed Acyclic Graphs (DAGs) have edges encoding a causal relationship between the nodes that they connect; the conditional independences (CIs) encoded by a DAG constitute the causal structure. A given probability distribution is compatible with a given causal structure if it fulfills all of the CI constraints implied by the DAG.
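As a toy illustration of a CI constraint (my own example, not from the paper): the DAG A \leftarrow C \rightarrow B implies I(A;B|C) = 0, which we can verify numerically for any distribution of the form p(a,b,c) = p(c)\,p(a|c)\,p(b|c).

```python
import numpy as np
from itertools import product

def H(p):
    """Shannon entropy (bits) of a joint distribution given as an ndarray."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# The DAG A <- C -> B implies the CI constraint I(A;B|C) = 0.
# Build p(a,b,c) = p(c) p(a|c) p(b|c) and check the constraint entropically.
rng = np.random.default_rng(1)
pc = rng.dirichlet(np.ones(2))               # p(c)
pa_c = rng.dirichlet(np.ones(2), size=2)     # p(a|c), rows indexed by c
pb_c = rng.dirichlet(np.ones(2), size=2)     # p(b|c)
p = np.zeros((2, 2, 2))                      # axes (a, b, c)
for a, b, c in product(range(2), repeat=3):
    p[a, b, c] = pc[c] * pa_c[c, a] * pb_c[c, b]

# I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)
i_ab_c = H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p) - H(p.sum(axis=(0, 1)))
print(f"I(A;B|C) = {i_ab_c:.2e}")            # ~0, up to floating point
```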
Marginal Scenarios: Usually, for a variety of reasons, not all variables in a DAG are observable.
In some cases, the probabilities compatible with a causal structure form a polytope within which they must fall; Bell’s theorem is the classic example. However, in an example due to Geiger and Meek (symmetric, consisting of three unobserved variables, each causing two of the three observable variables A, B, and C), the geometry is non-convex.
Classically, going to the entropic description, we get a description of marginal causal entropic cones, in the framework of entropic Bell inequalities pioneered by Braunstein & Caves in 1988.
A variant of this is the common ancestor problem.
Quantumly, there is no unique definition of what the causal structure is. What should we use? Informally, it should specify the dependencies between a collection of quantum states and classical variables. We use three kinds of nodes:

  • Classical
  • Quantum states
  • Quantum operations, i.e. CPTP maps

Because measurement affects a quantum system, some CIs that are valid classically cannot even be defined in the quantum case. But other independencies still hold, and we can use the data processing inequality. One consequence is a strengthening of the information-causality (IC) inequality, which also allows it to be generalized, e.g., to quantum messages (see also arXiv:1210.1943).


Nicolas Delfosse, Jacob Bian, Philippe Guerin and Robert Raussendorf.
Wigner function negativity and contextuality in quantum computation on rebits
abstract arXiv:1409.5170

What makes quantum computing work? Many explanations have been proffered in the past: entanglement, contextuality, superposition and interference, and even quantum discord. While it seems fair to say that we really have no idea what makes a quantum computer “work” so well, we do have some ideas for some specific models.
The main result of this talk is that contextuality is a necessary resource for universal quantum computation with magic states on rebits. Pro: these are two-level systems; Con: these are not qubits, only “rebits”.
Mermin’s square and Mermin’s star are state-independent proofs of contextuality. They look as if they will spoil the party. But in fact they help.
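For concreteness, the Mermin-Peres square algebra is easy to verify numerically (this is the standard construction, not anything specific to the talk): nine two-qubit observables arranged so that every row multiplies to +I while the columns multiply to +I, +I, and -I, which rules out any noncontextual assignment of \pm 1 values.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The nine observables of the Mermin-Peres square. Within each row and
# each column the operators commute, so they are simultaneously measurable.
square = [
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
    [np.kron(X, Z),  np.kron(Z, X),  np.kron(Y, Y)],
]
for k in range(3):
    row = square[k][0] @ square[k][1] @ square[k][2]
    col = square[0][k] @ square[1][k] @ square[2][k]
    print(f"row {k}: +I? {np.allclose(row, np.eye(4))}   "
          f"col {k}: -I? {np.allclose(col, -np.eye(4))}")
# All three rows multiply to +I; columns 0 and 1 multiply to +I, but
# column 2 multiplies to -I, so no noncontextual +/-1 assignment works.
```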
Previous work by Anders and Browne ’09 showed that contextuality powers measurement-based quantum computation, and in 2014 Howard et al. showed that contextuality powers quantum computation with magic states.
In the setting of quantum computation by state injection, we have a restricted family of gates (e.g. Clifford gates) and some noisy version of a state that cannot be prepared from that gate set starting from computational basis states. Which properties must such a fiducial state possess in order to “promote” a restricted model of Clifford gates to a universal quantum computer? In work by Joe Emerson’s group, this was answered for qudits of odd prime dimension: we need

  • Wigner function negativity
  • Contextuality

Hudson’s theorem characterizes the set of pure states with non-negative Wigner functions: it is precisely the set of states that are Gaussian, a.k.a. stabilizer states in the discrete setting. Clifford operations cannot introduce (Wigner) negativity through gate operations, and so negativity becomes a resource supplied by the initial magic states. All of this works out very cleanly in the case of odd prime dimensions.
Mermin’s magic square implies that not all contextuality can be attributed to states. This seems to ruin our “resource” perspective. What’s worse, Mermin’s square yields a contextuality witness that classifies all 2-qubit quantum states as contextual.
To deal with this, we move to rebits, so that the density matrix \rho is restricted to be real with respect to the computational basis for all times. We also have to use a restricted gate set that is CSS-preserving, and purely Clifford. This also affects the state preparations and measurements that are allowed, but let’s skip the details.
Now there is a d=2 Hudson’s theorem: A pure n-rebit state has a non-negative Wigner function if and only if it is a CSS stabilizer state.
Wigner non-negativity then implies that Pauli measurements on \rho are described by a non-contextual hidden variable model. Equivalently, contextuality implies negativity of the Wigner function, in our rebit/CSS setting.
There is no contradiction with the Mermin magic square because of the rebit property and the CSS restriction: we cannot measure ZX and XZ simultaneously. Only all-X and all-Z Pauli operators can be measured simultaneously.


Ramis Movassagh and Peter Shor.
Power law violation of the area law in quantum spin chains
abstract arXiv:1408.1657

We’ve already discussed the area law and its important implications for the efficient simulation of gapped 1-dimensional spin systems. We believe that an area law holds for higher-dimensional spin systems if the Hamiltonian is gapped. We also know that the local Hamiltonian problem on 1-dimensional spin chains is in general QMA-complete, though the spectral gap shrinks as 1/\mathrm{poly}(n) in the hard instances.
These authors have a series of papers on this larger project.
On a spin chain of qudits with local dimension d and random interactions that are projectors of rank r, they showed that

  • The ground state is frustration free but entangled when d \leq r \leq d^2/4.
  • The Schmidt ranks were known, but the gap and entanglement entropy were not.

Then Irani, and Gottesman and Hastings, used the type of Hamiltonians from 1-d QMA-hardness constructions to obtain Hamiltonians with 1/poly(n) gap and O(n) entropy. The local dimension is “O(1)” but in a CS rather than physics sense (i.e. the constants are big). Some condensed matter theorists have dismissed these Hamiltonians as “fine-tuned.”
Previous work by these authors achieved frustration-freeness, a 1/poly(n) gap, and O(log n) entanglement entropy, but this could still be explained away as “criticality”, since this entropy scaling matches what one expects from conformal field theory.
The latest results, with entanglement entropy O(\sqrt{n}) and the same guarantees on the spectral gap, do not match any of the condensed matter theorists’ explanations. They only require spins of dimension 5, so they are much closer to a natural model.
The ground state they construct, the “Motzkin state”, is built from so-called Motzkin paths; in the original construction it was the uniform superposition of all Motzkin walks on a chain of length 2n. A Motzkin walk starts at 0, ends at 0, remains nonnegative, and at each intervening step goes up, goes down, or stays level. Here the walks come with two kinds of up step and two kinds of down step (each down step must match the color of the up step it closes), and the Hamiltonian introduces an energy penalty for violating the Motzkin rules.
The reason these states are highly entangled is that the walk must balance across any cut: the unmatched up steps on the left half of a cut must be closed by corresponding down steps, with matching colors, in the right half.
Combinatorially, we know how many Motzkin walks there are of height m, with s kinds of step and length n. Turning the sum into an integral and doing a saddle-point integration, we get the entanglement entropy.
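Here is a small dynamic program (my own sketch, not from the paper) that counts colored Motzkin walks. Each up step can take any of s colors; the color of the matching down step is then forced, so up steps carry weight s and down steps weight 1:

```python
def count_colored_motzkin(n, s=1):
    """Count length-n Motzkin walks with s colors of up/down step,
    where each down step must match the color of the up step it
    closes. DP over the current height of the walk."""
    ways = [0] * (n + 1)
    ways[0] = 1
    for _ in range(n):
        new = [0] * (n + 1)
        for h, w in enumerate(ways):
            if w == 0:
                continue
            new[h] += w                # level step
            if h + 1 <= n:
                new[h + 1] += s * w    # up step: s color choices
            if h >= 1:
                new[h - 1] += w        # down step: color forced
        ways = new
    return ways[0]                     # walks that return to height 0

# s=1 gives the Motzkin numbers 1, 1, 2, 4, 9, 21, ...
print([count_colored_motzkin(n) for n in range(6)])
print(count_colored_motzkin(6, s=2))   # two colors, as in the talk
```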
One can implement the constraints of a colored Motzkin walk with local constraints, and these become the terms in the local Hamiltonian. You can get an upper bound on the spectral gap using the variational principle. The lower bound can be obtained using similar techniques as the previous results by the authors and others.
Is there a continuum limit for these models? Can we rigorously prove these results with an external magnetic field that can eliminate the need for boundary conditions? Are there frustration-free Hamiltonians with unique ground states and no boundary conditions that violate the area law by large factors?


Hector Bombin.
Gauge color codes and Single-shot fault-tolerant quantum error correction (Plenary Talk)
abstract-89 arXiv:1311.0879 abstract-90 arXiv:1404.5504

Fault-tolerant gates are great, but they can’t be done in a transversal way and still be universal. This is the celebrated Eastin-Knill theorem. In the context of topological quantum error-correcting codes, there is a natural relaxation of the notion of transversal gates to finite-depth quantum circuits [Bravyi-Koenig’09].
Color codes are an interesting class of topological stabilizer codes that allow for a transversal implementation of a T gate in three dimensions; they generally saturate the Bravyi-Koenig bound on the level of the Clifford hierarchy reachable by transversal gates. Hector has generalized his color codes to a subsystem-code version. The recent important paper by Paetznick and Reichardt introduced the notion of gauge fixing, which lets us jump between different codes with different transversality restrictions, and this lets us sidestep the Eastin-Knill theorem. The new gauge color codes can be combined with this gauge-fixing idea to move between conventional color codes and gauge color codes; between these two codes there are two different sets of transversal operations that together are universal.
In typical fault-tolerant methods, we make noisy syndrome measurements and repeat them several times to avoid errors. The decoding step is a global classical calculation, and the correction is a transversal quantum operation. Hector’s new paradigm of single-shot fault tolerance is a way to avoid the requirement of multiple measurement rounds.
3D color codes turn out to be single-shot fault tolerant (SSFT). This is because the gauge degrees of freedom have some redundancy and this can be used to make inferences about which stabilizer measurements might have been faulty. The notion of SSFT is closely linked to the notion of self-correction via the physical mechanism of confinement. As a simple example, consider the ferromagnetic Ising model in a symmetry-broken state below the critical temperature. Anti-aligned magnetic domains are confined in this phase. The following week Poulin gave a talk at Coogee that was skeptical about the possibility of single-shot fault tolerance. Definitely this notion needs to be made more precise.
Suppose we want to do fault-tolerant error correction in a gauge color code. A faulty gauge syndrome will be one with endpoints, and we can repair the gauge syndrome, with the branching points of the result giving the new syndrome. Importantly, the gauge syndrome is unconfined; it is random except for the fixed branching points. The effective wrong part of the gauge syndrome, however, is confined. Each connected component has branch points with neutral charge. Therefore the branching points exhibit charge confinement. This sounds intriguing, but none of the Pontiffs really understand the details.
Gauge fixing to either Z or X will give you a transversal implementation of either T or HTH, and this lets you perform arbitrary gates on the encoded logical qubit.
Bonus Result: 3d quantum computation, with local operations and constant time overhead, but global classical computation. This uses the 3d color code for computation with a stack of 2d codes for memory; see arXiv:1412.5079 for details.


Courtney Brell.
Self-correcting stabilizer quantum memories in 3 dimensions or (slightly) less
abstract arXiv:1411.7046

We’ve previously blogged about this very interesting result here. But now there’s a problem! The thermodynamic considerations all still seem to hold, and the Hausdorff dimension of the code is still 3 or less, but the specific embedding theorem that Courtney had used previously does not apply. Therefore, it remains open whether this code can be embedded in 3 dimensions with constant density. Courtney is currently working to fix the proof, but for now the embeddability of these codes is downgraded to a conjecture.
Courtney also gave a heuristic argument for why this embeddability conjecture might be true. In the limit of low lacunarity (which is basically a measure of how much a fractal violates translation invariance) there is a simple condition that says that the density of a projection stays bounded.
Two interesting tools that Courtney uses for the thermodynamic arguments that might have broader interest are the GKS inequality, which says that adding ferromagnetic interactions cannot reduce ferromagnetic order, and Merlini-Gruber duality, which is a duality similar to the Kramers-Wannier duality used to prove the phase transition in the Ising model.


Henrik Wilming, Rodrigo Gallego and Jens Eisert.
Universal operations in resource theories and local thermodynamics
abstract arXiv:1411.3754

This talk takes a resource-theory approach to thermodynamics. The goal is to extract work from a state, given various types of permitted free operations and a possibly limited ability to change the Hamiltonian by investing work. For each class of operations, the Gibbs state is a “free” state, analogous to separable states in entanglement theory. The options are

  • weak thermal contact: Bringing the system in contact with the heat bath puts it into thermal equilibrium.
  • thermal operations: more generally, any energy-conserving unitary applied jointly to the system and the heat bath. This is a much larger set of operations than weak thermal contact.
  • A still larger class of maps comprises all quantum channels that have the Gibbs state as a fixed point: call these Gibbs-preserving (GP) maps. GP maps can sometimes take a point below the thermal curve to a little bit above, without violating the 2nd law.

How effective these different models are at extracting work depends on the class of Hamiltonians allowed. If arbitrary Hamiltonians are allowed, then there is a collapse: weak thermal contact does as well as GP maps (and of course also thermal operations), essentially extracting all the surplus free energy of a state. If we restrict the class of possible Hamiltonians, then separations between these models are possible, in part because it is harder to deal efficiently with off-diagonal terms in the density matrix.
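To quantify the “surplus free energy” mentioned above (a standard identity, not specific to the talk): writing \gamma = e^{-\beta H}/Z for the Gibbs state, the work extractable from a state \rho is governed by F(\rho) - F(\gamma) = \beta^{-1} S(\rho\|\gamma), the relative entropy of \rho to the thermal state.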

2 Replies to “QIP 2015 zombie-blogging, Day 5”

  1. Thanks a lot, Aram and Steve, for the QIP blogging! I attended the conference, but still found it extremely nice: now I have the impression, at least, that I retrieved something from every talk, even the ones that seemed completely out of reach to me on the spot 😉

  2. Please let me join with Cécilia in appreciating this wonderful live-blogging, and thanking all concerned (Pontiffs especially) for the effort in making it accessible to everyone.
    For me, it has been a wonderful (and continuing) enterprise to collate all presentations/posters relating to “entropy” and “area laws”, with a view toward grasping shared ideas and methods relating to quantum descriptions of transport processes.
    QIP live-blogging makes surveys like this feasible, thereby greatly extending the influence of QIP conferences, for which many folks (including me) are very grateful.
