QIP 2015 business meeting

A QIP 2015 epilogue: our notes from the business meeting. See also this post by Kaushik Seshadreesan.

Business Meeting Report

Local Organizing Committee Report:

Finance:

  • $193,545 – $191,467 = $2,078 profit!
  • registration income: $185,340
  • refunds: about $3,000
  • external sponsorships: $30,450, with another $5k due later
  • total income before tax: $212,900
  • total income after tax: $193,545

Expenses:

  • tutorial: $5,296
  • main program: $47,941
  • banquet: $120 × 270 = $32,400
  • admin: $10k
  • travel support for about 41 junior researchers: $34k+
  • invited speakers: $45k estimated
  • rump session: $10,600 estimated
  • best student paper prize: $700
  • other/misc: $5k
  • total: $191,467

Registration:

  • total: 276
  • in 2014: 261 (early, before 31 Oct: 169; standard, by 30 Nov: 68; late, by 31 Dec: 29)
  • in 2015: 15 (on-site: 10)
  • no-shows: 10
It’s great that the budget was balanced to about 1%. However, what to do about the little bit of extra money? This is a perpetual problem. Runyao had a nice simple idea: just send it to next year’s QIP and use it for travel support for junior researchers.

Program Committee Report:

197 talk-or-poster submissions (1 withdrawn). In Barcelona there were 222, but the decrease probably reflects the distance to Sydney rather than anything about the field.
3 PC members for each submission, and 25 submissions per PC member.
3 weeks of refereeing, 2 weeks of discussion.
Much faster than a typical theoretical CS conference
39 accepts, including 2 mergers, for a 20% acceptance rate.
The SC invited 3 more speakers, bringing the program to 40 talks.
6 of these recommended to SC for plenary status
one best student paper
There were 601 reviews, at least 3 per submission
There were 142 external reviewers and 220 external reviews.
In the first round 102 posters were accepted: 5 poster-only submissions, plus rejected talk-or-poster submissions.
Later there were 92 more poster submissions, of which 90 were accepted (one was out of scope and one was wrong).
About 40 people withdrew their posters or simply didn’t put up a poster.
We could have accepted about 20-30 more good papers. Future choice: accept more papers? This implies parallel sessions (if we decide to accept all of those good-enough-for-QIP papers). There are pros and cons. Pro: more people will be happy, and the program better represents the research being done. Con: the community gets split between sessions, and the conference needs two medium-size lecture rooms (but what about the plenary talks?).
Anecdotal feedback from authors: some reviews were sloppy. On the other hand, with only 3 weeks of refereeing we cannot expect too much. CS reviewers are more detailed and more critical.
Do we want the 3-page abstract format? There was not much discussion on this point, but Ronald de Wolf said that the costs outweigh the benefits in his opinion. We don’t have strong opinions. Steve likes them but thinks maybe two pages would be enough. Aram thinks we could do without them, or could go to 4 pages so the physicists could use their PRL and the computer scientists could use the first 4 pages of their long version. Apparently no one in the audience had strong opinions on this either, since there was no discussion of this point. Hopefully the next PC chair at least thinks carefully about this instead of going with the default.
Do we want to have the abstracts on the website? Again, there was no discussion of this point, but RdW thinks this is generally a good idea (and us Pontiffs agree with him).
Should we make the reviews public (subject to caveats)? E.g., something like what was done for TQC 2014, where lightly edited reviews were posted on SciRate. The answer is obviously yes. 🙂 We made a case for partial open reviewing, and the slides are here. The “partial” here is important. I think a lot of people have misinterpreted our proposal and counter-proposed a compromise in which only edited summaries of the referee reports are posted for accepted papers; this is funny because it is essentially what we did for TQC 2014! It is true that in implementing this the details are extremely important, including instructions to the PC & subreviewers and the explanations of the system to authors and the public (e.g. crucially including the fact that published reviews are not meant to explain acceptance/rejection or even to label as “good” or “bad” but rather to add perspective). Probably these points should be in a longer post.
QIP 2016 will be held in Banff, with Barry Sanders chairing the local organizing committee.
Bids for QIP 2017 are being put in by Zürich and Seattle with local organizing committee chairs of Rotem Arnon Friedman and Krysta Svore respectively. (I mention chairs because my understanding is that no matter how large the committee is, they do a \Omega(1) fraction, or even a 1-o(1) fraction, of the total work.) A straw poll of attendees shows slight favor for Zürich. Krysta said that MSR would probably still be interested in hosting in 2018, when the geographic case for Seattle would be stronger. Neither place will be as glorious as Sydney in January, but Seattle winters are pretty mild (although gray).
Stephanie Wehner presented the results of a poll that showed support for parallel sessions (about 50% of respondents favored this over options like dropping plenary talks, dropping the free afternoon, shorter talks or doing nothing). Others, like Daniel Gottesman, complained that the poll seemed to be asking how to increase the number of talks, rather than whether we should. A show of hands at this point (from an audience that by now had become pretty small, perhaps in part because there was free beer at the rump session at this point) showed an audience roughly evenly divided between supporting and opposing an increase in the number of talks. The Trinity of Pontiffs are divided on the parallel question, but of course it doesn’t have to be all or nothing. We might try an experiment doing parallel talks on one day (or even half-day) out of five, so we can see how we like it.

QIP 2015 zombie-blogging, Day 5

Today’s highlight: an algorithm running in time O(n^{59}), also known as “polynomial time” or “efficient”.

Joseph Fitzsimons and Thomas Vidick.
A multiprover interactive proof system for the local Hamiltonian problem (Plenary Talk)
abstract arXiv:1409.0260

Thomas begins by reminding everyone about what NP is and mentions that Super Mario is NP-complete. However, I solved it as a child and saved the princess from Bowser. I’m not sure what the implications of this are for P vs. NP. Knowing that Super Mario is in NP saves me from having to learn it. I just reduce it to 3-SAT.
All problems in NP have a local verification procedure because of the NP-completeness of 3-SAT. But we still need to know the whole proof, right?
There is a nice way to make proof verification completely local, and that is to introduce more than one prover. We introduce a familiar cast of characters: Merlin and Arthur. Arthur is the referee and is allowed to ask two non-communicating Merlins to prove something to him. The value is defined to be \omega(G) = \sup_{\textrm{Merlins}} \textrm{Pr}[\text{Arthur accepts}]. We need to ensure that this scheme is both sound and complete. A stronger version known as the PCP theorem has the following interesting property. Arthur can do some pre-processing and then just check a constant number of clauses to get a high probability of soundness.
The 3-local Hamiltonian problem is a well-known complete problem for QMA. Here one is given a local Hamiltonian on n qubits with a promise that the ground state energy is either less than a or greater than b, with b-a \ge 1/\mathrm{poly}(n) (the “promise gap”), and one must decide which is the case.
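Written out, this is just the standard formulation (not specific to the talk): given H = \sum_l H_l on n qubits, with each term H_l acting on at most 3 qubits and \|H_l\| \le 1, decide whether
\lambda_{\min}(H) \le a \quad \text{or} \quad \lambda_{\min}(H) \ge b, \qquad b - a \ge 1/\mathrm{poly}(n).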
In the quantum setting, we allow the Merlins to share initial entanglement, but they can’t communicate after that. Now the value is denoted by \omega^*(G) = \sup_{\textrm{Merlins}} \textrm{Pr}[\text{Arthur accepts}]. The star denotes this additional quantum entanglement shared by the Merlins.
Can Arthur use the entangled Merlins to his advantage, to recognize languages beyond NP? The Peres-Mermin “magic square” game makes this unclear since at least in some cases the Merlins can “cheat” and use entanglement to increase the acceptance probability. But it was shown [KKMTV 08, IKM 09] that Arthur can still use entangled Merlins to at least recognize languages in NP. An aside: this, like most previous work, viewed the entanglement between the Merlins mostly as a drawback to be mitigated using techniques like monogamy relations that force the provers to use strategies that resemble unentangled strategies.
To illustrate the difficulties, suppose we have a 1D Hamiltonian with nearest-neighbor interactions. Suppose that these are anti-ferromagnetic so that the unique ground state of each two-qubit term is the singlet, which we say has zero energy. This is of course highly frustrated and the ground-state energy will be proportional to the number of qubits. But a naive two-prover proof system would allow us to be convinced that the ground-state energy is zero. Suppose we split the interactions into even and odd layers, each consisting of mutually commuting terms. We can have Merlin-1 take the odd qubits and Merlin-2 take the even qubits. We choose a random interaction, say on sites j and j+1, and ask Merlin-1 for one of them and Merlin-2 for the other. But this doesn’t work. The Merlins need only share a singlet state which they just return regardless of which question we ask.
The main result is a five-player game for the 3-local Hamiltonian problem. The messages from the Merlins to Arthur are quantum, but very short. The value of the game with n classical questions, 3 answer qubits, and with 5 players is QMA-hard to compute to within a 1/\mathrm{poly}(n) factor. Consider the ground state of the Hamiltonian encoded in the 5-qubit code. We will require the five Merlins to each have one share of these encoded qubits, so each Merlin has n qubits.
The protocol is as follows. Pick a random clause H_l on qubits i,j,k and either:

  1. energy check
    1. ask each Merlin for his share of i,j,k
    2. decode
    3. measure H_l
  2. code check
    1. ask one Merlin for his share of i,j,k
    2. ask other Merlins for their share of i
    3. verify that qubits lie in code subspace.

The intuition is that the Merlins are forced to be in a code space, with the states of 4 Merlins constraining the fifth. How is this proven?
The most general Merlin strategy is to share a state |\psi\rangle and to respond to a request for qubit i by applying a unitary U_i, or to a request for i,j,k with the unitary V_{i,j,k}. We would like to argue that any such strategy can be turned into a method for extracting all n qubits from the state |\psi\rangle.
This will be done using a method similar to a mosquito drinking blood: as it extracts blood it replaces the volume of liquid with its saliva. Here we extract a qubit i using U_i (actually U_i^{(1)}, \ldots, U_i^{(5)}), and then replace those qubits with halves of EPR pairs and then each prover applies U_i^\dagger. Actually, the other halves of the EPR pairs are never used so even a maximally mixed state would work. The point is just to put something there, effectively performing a logical SWAP.
This work also leads to a natural reformulation of the quantum PCP conjecture: Constant-factor approximations to the entangled value of a quantum game with entangled provers \omega^*(G) are QMA hard. The result is a first step in this direction by solving the case of a 1/\mathrm{poly}(n) factor.
Another consequence is for MIP and related complexity classes. MIP(c,s) refers to the class of problems with a multi-prover interactive proof with completeness c and soundness s. In this language the PCP theorem implies that NEXP=MIP(1,1/2).
In the quantum case Thomas proved that NEXP \subseteq (Q)MIP^*(1,1/2) in an earlier breakthrough. This work shows now that QMA_{EXP} is contained in MIP*(1-2^{-p}, 1-2^{-2p}), proving for the first time that entanglement increases the power of quantum proof systems. Here “proving” is in the usual complexity-theory sense, where we have to make some plausible assumption: in this case, that \text{QMA}_{\text{EXP}} \not\subseteq \text{NEXP}.
During the questions, Ronald de Wolf and Daniel Gottesman pointed out that you might be able to reduce it from 5 Merlins to 4 by using error-detecting codes, or even 3 by using qutrit quantum error-detecting codes. Or what about 2 using approximate QECC? (This last probably won’t work.)


Sergey Bravyi and Matthew Hastings. On complexity of the quantum Ising model
abstract arXiv:1410.0703

(This talk was presented by the heroic David Gosset because Sergey didn’t get a visa in time.)
The transverse Ising model (TIM) is important for several reasons. For one thing, this model is ubiquitous in the theory of phase transitions. It’s a good model of certain non-universal quantum devices like the D-wave machine. And the recent breakthrough of Cubitt and Montanaro shows that the TIM appears naturally as a possible intermediate complexity class.
We would love to understand the quantum annealing (QA) algorithm of Farhi et al., and unfortunately we won’t be able to do that here. But we can use it to help us understand a target-simulator model that lets us do reductions of various Hamiltonian complexity problems. For example, if we have a simulator QA machine that has TIM Hamiltonians, then it is unlikely that we can simulate a target QA machine that has access to 2-local Hamiltonians (which are BQP complete). The simulator TIM machine is contained in BQP \cap PostBPP, which is unlikely to equal BQP.
Recall the class of “stoquastic” local Hamiltonians. These are local Hamiltonians with “no sign problem”, meaning that all off-diagonal matrix elements in the computational basis are real and non-positive. There is a natural complexity class, StoqMA, that captures the complexity of these Hamiltonians.
StoqMA is like QMA but Arthur can only apply reversible classical gates (CNOT, Toffoli) and measure some fixed qubit in the X basis. He accepts iff the measurement outcome is +. He can use |0\rangle and |+\rangle ancillas.
StoqMA’s relation to other complexity classes:

  • P \subseteq NP \subseteq MA \subseteq StoqMA \subseteq QMA \subseteq A_0PP
  • StoqMA \subseteq SBP \subseteq PostBPP
  • SBP \subseteq AM \subseteq \Pi_2

(A_0PP and SBP are approximate counting classes.)
Main result: The local Hamiltonian problem for TIM on degree-3 graphs is StoqMA-complete. This sharpens the Cubitt and Montanaro classification by linking the TIM directly to StoqMA.
In the ferromagnetic TIM, the coupling terms are all positive and the Z-field is uniform. Another result is a polynomial-time approximation algorithm for the partition function of the ferromagnetic TIM. This generalizes and in fact makes use of a result of Jerrum & Sinclair from 1993. The run time is polynomial: O(n^{59}). Awesome. Taking the log of the partition function, one gets the free energy within an additive error, and at low temperature, this approximates the ground-state energy.
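For reference, the standard relation being used in that last step (not a result of this paper) is
F(\beta) = -\frac{1}{\beta} \ln Z(\beta), \qquad Z(\beta) = \mathrm{Tr}\, e^{-\beta H}, \qquad F(\beta) \to E_0 \text{ as } \beta \to \infty,
so a multiplicative 1\pm\epsilon approximation of Z gives the free energy, and hence at low temperature the ground-state energy, to additive error O(\epsilon/\beta).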


Rafael Chaves, Christian Majenz, Lukas Luft, Thiago O. Maciel, Dominik Janzing, Bernhard Schölkopf and David Gross.
Information-Theoretic Implications of Classical and Quantum Causal Structures
abstract arXiv:1407.3800

Given some empirically observable variables, which correlations between these are compatible with a presumed causal structure? This is a fundamental problem in science, as illustrated by the following situation. You would like to claim that smoking causes cancer. But all you have is correlation data… and correlation doesn’t imply causation. So when can we make that inference?
One of the main ideas of this work is to use entropies rather than probabilities in order to avoid headaches associated with nonconvex structure that appears in previous approaches to answering these types of questions.
Classically, Directed Acyclic Graphs (DAG) have edges encoding a causal relationship between the nodes that they connect. Conditional Independences (CIs) are encoded by a DAG; this is the causal structure. A given probability distribution is compatible with a given causal structure if it fulfills all of the CI constraints implied by a given DAG.
Marginal Scenarios: Usually, for a variety of reasons, not all variables in a DAG are observable.
In some cases, e.g. Bell scenarios, the probabilities compatible with a causal structure form a polytope within which they must fall. However, in an example due to Geiger and Meek, which is symmetric and consists of three unseen variables each causing two of the three observable events A, B, and C, the compatible region is non-convex.
Classically, going to the entropic description, we get a description of marginal causal entropic cones in terms of the entropic Bell inequalities framework pioneered by Braunstein & Caves in 1988.
A variant of this is the common ancestor problem.
Quantumly, there is not a unique definition of what the causal structure is. What should we use? Informally it should specify the dependencies between a collection of quantum states and classical variables. We use 3 kinds of nodes,

  • Classical
  • Quantum states
  • Quantum operations, i.e. CPTP maps

Because measurement affects a quantum system, some CIs that are classically valid cannot even be defined in the quantum case. But other independencies still hold, and we can use the data processing inequality. One consequence is a strengthening of the information causality (IC) inequality, which also allows it to be generalized, e.g., to quantum messages (see also 1210.1943).


Nicolas Delfosse, Jacob Bian, Philippe Guerin and Robert Raussendorf.
Wigner function negativity and contextuality in quantum computation on rebits
abstract arXiv:1409.5170

What makes quantum computing work? Many explanations have been proffered in the past: entanglement, contextuality, superposition and interference, and even quantum discord. While it seems fair to say that we really have no idea what makes a quantum computer “work” so well, we do have some ideas for some specific models.
The main result of this talk is that contextuality is a necessary resource for universal quantum computation with magic states on rebits. Pro: these are two-level systems; Con: these are not qubits, only “rebits”.
Mermin’s square and Mermin’s star are state-independent proofs of contextuality. They look as if they will spoil the party. But in fact they help.
Previous work by Anders and Browne ’09 showed that contextuality powers measurement based quantum computation, and in 2014 Howard et al. showed that contextuality powers quantum computation with magic states.
In the setting of quantum computation by state injection, we have a restricted family of gates (e.g. Clifford gates) and we have some noisy version of a state that cannot be prepared from that gate set and starting with computational basis states. Which properties must a fiducial state possess in order to “promote” a restricted model of Clifford gates to a universal quantum computer? In work by Joe Emerson’s group, this question was answered for odd-prime-dimensional qudits; we need:

  • Wigner function negativity
  • Contextuality

Hudson’s theorem characterizes the set of pure states with non-negative Wigner functions: it is precisely the set of states that are Gaussian, aka stabilizer states in the discrete setting. Clifford operations cannot introduce (Wigner) negativity through gate operations, and so that makes negativity a resource for initial magic states. All of this works out very cleanly in the case of odd-prime dimensions.
Mermin’s magic square implies that not all contextuality can be attributed to states, which seems to ruin our “resource” perspective. What’s worse, Mermin’s square yields a contextuality witness that classifies all 2-qubit quantum states as contextual.
To deal with this, we move to rebits, so that the density matrix \rho is restricted to be real with respect to the computational basis for all times. We also have to use a restricted gate set that is CSS-preserving, and purely Clifford. This also affects the state preparations and measurements that are allowed, but let’s skip the details.
Now there is a d=2 Hudson’s theorem: A pure n-rebit state has a non-negative Wigner function if and only if it is a CSS stabilizer state.
Wigner non-negativity then implies that Pauli measurements on \rho are described by a non-contextual hidden variable model. Equivalently contextuality implies negativity of the Wigner function, in our rebit/CSS setting.
There is no contradiction with the Mermin magic square because of the rebit property and the CSS restriction: we cannot measure ZX and XZ simultaneously. Only all-X and all-Z Pauli operators can be measured simultaneously.


Ramis Movassagh and Peter Shor.
Power law violation of the area law in quantum spin chains
abstract arXiv:1408.1657

We’ve already discussed the area law and its important implications for the efficient simulation of gapped 1-dimensional spin systems. We believe that an area law holds for higher-dimensional spin systems if the Hamiltonian is gapped. We also know that 1-dimensional spin-chains can in general be QMA-complete, but the spectral gap shrinks in the hard cases.
These authors have a series of papers on this larger project.
On a spin chain with qudits and random interactions that are projectors of rank r, they showed that

  • The ground state is frustration free but entangled when d \leq r \leq d^2/4.
  • Schmidt ranks were known, but gap/entropy weren’t known.

Then Irani, and Gottesman and Hastings, used the type of Hamiltonians from 1-d QMA-hardness constructions to obtain Hamiltonians with 1/poly(n) gap and O(n) entropy. The local dimension is “O(1)” but in a CS rather than physics sense (i.e. the constants are big). Some condensed matter theorists have dismissed these Hamiltonians as “fine-tuned.”
Previous work by these authors had frustration-free, 1/poly(n) gap and O(log n) entanglement, but this could still be explained away as being “critical” since this entropy scaling matched what one expected from conformal field theory.
The latest results, with entanglement entropy O(\sqrt{n}) and the same guarantees on the spectral gap, do not match any of the condensed matter theorists’ explanations. They only require spins of dimension 5, so they are much closer to a natural model.
The ground state in question is the “Motzkin state”, so called because the Hamiltonian is built out of Motzkin paths. In the original construction, the ground state was a superposition of all Motzkin walks on a chain of length 2n.
A Motzkin walk is a walk that starts at 0, ends at 0, remains nonnegative, and goes up or down or remains level at each intervening step; the Motzkin state is the superposition of all such walks. Here the Motzkin walks can have two kinds of up step and two kinds of down step, and the Hamiltonian introduces a penalty for violating the Motzkin rules.
The reason that these states can be highly entangled is that the amount a walk goes “up” in the left half of a cut must equal the amount it comes “down” in the right half.
Combinatorially we know how many Motzkin walks there are of height m, number of kinds of step s, and length n. Turning the sum into an integral and doing saddle point integration we get the entanglement.
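To make the combinatorial object concrete, here is a small dynamic-programming sketch (ours, not from the paper) that counts Motzkin walks of length n, including a colored variant in which each matched up-down pair may take one of s colors; for s = 1 it reproduces the Motzkin numbers 1, 1, 2, 4, 9, 21, 51, ...

def motzkin_count(n, s=1):
    """Number of colored Motzkin walks of length n: walks with up/level/down steps
    that start and end at height 0 and never go negative, where each matched
    up-down pair can be colored in one of s ways (s=1 gives the Motzkin numbers)."""
    # counts[h] = weighted number of length-k prefixes ending at height h,
    # picking up a factor of s whenever a down step closes an up-down pair.
    counts = [0] * (n + 1)
    counts[0] = 1
    for _ in range(n):
        new = [0] * (n + 1)
        for h, c in enumerate(counts):
            if c == 0:
                continue
            new[h] += c                  # level step
            if h + 1 <= n:
                new[h + 1] += c          # up step (its color is counted when the pair closes)
            if h >= 1:
                new[h - 1] += s * c      # down step: closes a pair, s color choices
        counts = new
    return counts[0]

print([motzkin_count(n) for n in range(8)])        # 1, 1, 2, 4, 9, 21, 51, 127
print([motzkin_count(n, s=2) for n in range(8)])   # the colored counts grow faster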
One can implement the constraints of a colored Motzkin walk with local constraints, and these become the terms in the local Hamiltonian. You can get an upper bound on the spectral gap using the variational principle. The lower bound can be obtained using similar techniques as the previous results by the authors and others.
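The variational principle being invoked here is the standard one: for a frustration-free Hamiltonian with ground energy zero, any trial state |\phi\rangle orthogonal to the ground state gives an upper bound on the gap,
\Delta \le \frac{\langle \phi | H | \phi \rangle}{\langle \phi | \phi \rangle},
so the job is to exhibit a low-energy state orthogonal to the Motzkin state.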
Is there a continuum limit for these models? Can we rigorously prove these results with an external magnetic field that can eliminate the need for boundary conditions? Are there frustration-free Hamiltonians with unique ground states and no boundary conditions that violate the area law by large factors?


Hector Bombin.
Gauge color codes and Single-shot fault-tolerant quantum error correction (Plenary Talk)
abstract-89 arXiv:1311.0879 abstract-90 arXiv:1404.5504

Fault-tolerant gates are great, but they can’t be done in a transversal way and still be universal. This is the celebrated Eastin-Knill theorem. In the context of topological quantum error-correcting codes, there is a natural relaxation of the notion of transversal gates to finite-depth quantum circuits [Bravyi-Koenig’09].
Color codes are an interesting class of topological stabilizer codes that allows for a transversal implementation of a T gate in three dimensions. It generally saturates the Bravyi-Koenig bound on the Clifford hierarchy for transversality. Hector has generalized his color codes to a subsystem code version. The recent important paper by Paetznick and Reichardt introduced the notion of gauge fixing that lets us jump between different codes with different transversality restrictions, and this lets us sidestep the Eastin-Knill theorem. The new gauge color codes can be combined with this gauge-fixing idea to move between conventional color codes and gauge color codes. In these two codes, there are two different sets of operations that together are universal.
In typical fault-tolerant methods, we make noisy syndrome measurements and we repeat them several times to avoid errors. The decoding step is a global classical calculation and the correction is a transversal quantum operation. Hector’s new paradigm of single-shot fault tolerance is a way to avoid the multiple rounds requirement in the old method.
3D color codes turn out to be single-shot fault tolerant (SSFT). This is because the gauge degrees of freedom have some redundancy and this can be used to make inferences about which stabilizer measurements might have been faulty. The notion of SSFT is closely linked to the notion of self-correction via the physical mechanism of confinement. As a simple example, consider the ferromagnetic Ising model in a symmetry-broken state below the critical temperature. Anti-aligned magnetic domains are confined in this phase. The following week Poulin gave a talk at Coogee that was skeptical about the possibility of single-shot fault tolerance. Definitely this notion needs to be made more precise.
Suppose we want to do fault-tolerant error correction in a gauge color code. A faulty gauge syndrome will be one with endpoints, and we can repair the gauge syndrome, with the branching points of the result giving the new syndrome. Importantly, the gauge syndrome is unconfined; it is random except for the fixed branching points. The effective wrong part of the gauge syndrome, however, is confined. Each connected component has branch points with neutral charge. Therefore the branching points exhibit charge confinement. This sounds intriguing, but none of the Pontiffs really understand the details.
Gauge fixing to either Z or X will give you a transversal implementation of either T or HTH, and this lets you perform arbitrary gates on the encoded logical qubit.
Bonus Result: 3d quantum computation, with local operations and constant time overhead, but global classical computation. This uses the 3d color code for computation with a stack of 2d codes for memory; see arXiv:1412.5079 for details.


Courtney Brell.
Self-correcting stabilizer quantum memories in 3 dimensions or (slightly) less
abstract arXiv:1411.7046

We’ve previously blogged about this very interesting result here. But now there’s a problem! The thermodynamic considerations all still seem to hold, and the Hausdorff dimension of the code is still 3 or less, but the specific embedding theorem that Courtney had used previously doesn’t apply. Therefore, it is still open whether this code can be embedded in 3 dimensions with constant density. Courtney is currently working to fix the proof, but for now the embeddability of these codes is downgraded to a conjecture.
Courtney also gave a heuristic argument for why this embeddability conjecture might be true. In the limit of low lacunarity (which is basically a measure of how much a fractal violates translation invariance) there is a simple condition that says that the density of a projection stays bounded.
Two interesting tools that Courtney uses for the thermodynamic arguments that might have broader interest are the GKS inequality, which says that adding ferromagnetic interactions cannot reduce ferromagnetic order, and Merlini-Gruber duality, which is a duality similar to the Kramers-Wannier duality used to prove the phase transition in the Ising model.


Henrik Wilming, Rodrigo Gallego and Jens Eisert.
Universal operations in resource theories and local thermodynamics
abstract arXiv:1411.3754

This talk takes a resource-theory approach to thermodynamics. The goal is to extract work from a state, given various types of permitted free operations and a possibly limited ability to change the Hamiltonian by investing work. For each class of operations, the Gibbs state is a “free” state, analogous to product states in entanglement theory. The options are

  • weak thermal contact: Bringing the system in contact with the heat bath puts it into thermal equilibrium.
  • thermal operations: More generally, any energy-conserving unitary operation on the system and heat bath together. This is a much larger set of operations than weak thermal contact.
  • A still larger class of maps comprises all quantum channels that have the Gibbs state as a fixed point: call these Gibbs-preserving (GP) maps. GP maps can sometimes take a point below the thermal curve to a little bit above, without violating the 2nd law.

How effective these different models are at extracting work depends on the class of Hamiltonians allowed. If any Hamiltonians are allowed then there is a collapse and weak thermal contact can do as well as GP maps (and of course also thermal operations), essentially extracting all surplus free energy of a state. If we restrict the class of possible Hamiltonians then separations between these models are possible, in part because it’s harder to deal efficiently with off-diagonal terms in the density matrix.
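For orientation, the “surplus free energy” here is the standard quantity from textbook thermodynamics (not something specific to this talk): for a bath at inverse temperature \beta and system Hamiltonian H, the benchmark for extractable work is the free-energy difference
W_{\max} = F(\rho) - F(\gamma_\beta), \qquad F(\rho) = \mathrm{Tr}(H\rho) - \frac{1}{\beta} S(\rho), \qquad \gamma_\beta = e^{-\beta H}/\mathrm{Tr}\, e^{-\beta H},
with S the von Neumann entropy; the separations discussed above are about how close each class of operations gets to this benchmark.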

QIP 2015 Return of the Live-blogging, Day 1

Jan 14 update at the end.

The three Pontiffs are reunited at QIP 2015 and, having forgotten how painful liveblogging was in the past, are doing it again. This time we will aim for some slightly more selective comments.

In an ideal world the QIP PC would have written these sorts of summaries and posted them on scirate, but instead they are posted on easychair where most of you can’t access them. Sorry about this! We will argue at the business meeting for a more open refereeing process.

The first plenary talk was:

Ran Raz (Weizmann Institute)
How to Delegate Computations: The Power of No-Signaling Proofs
TR13-183

Why is the set of no-signalling distributions worth looking at? (That is, the set of conditional probability distributions p(a,b|x,y) that have well-defined marginals p(a|x) and p(b|y).) One way to think about it is as a relaxation of the set of “quantum” distributions, meaning the input-output distributions that are compatible with entangled states. The no-signalling polytope is defined by a polynomial number of linear constraints, and so is the sort of relaxation that is amenable to linear programming, whereas we don’t even know whether the quantum value of a game is computable. But is the no-signalling condition ever interesting in itself?
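Written out, the no-signalling constraints just say that each party’s marginal cannot depend on the other party’s question (this is the standard definition):
\sum_b p(a,b|x,y) = \sum_b p(a,b|x,y') =: p(a|x) \quad \text{and} \quad \sum_a p(a,b|x,y) = \sum_a p(a,b|x',y) =: p(b|y)
for all a, b, x, x', y, y'.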

Raz and his coauthors (Yael Kalai and Ron Rothblum) prove a major result (which we’ll get to below) about the computational power of multi-prover proof systems where the provers have access to arbitrary non-signalling distributions. But they began by trying to prove an apparently unrelated classical crypto result. In general, multiple provers are stronger than one prover. Classically we have MIP=NEXP and IP=PSPACE, and in fact the MIP protocol requires just one round, whereas k rounds with a single prover is (roughly) within the k’th level of the polynomial hierarchy (i.e. even below PSPACE). So simulating many provers with one prover seems in general crazy.

But suppose instead the provers are computationally limited. Suppose they are strong enough for the problem to be interesting (i.e. they are much stronger than the verifier, so it is worthwhile for the verifier to delegate some nontrivial computation to them) but too weak to break some FHE (fully homomorphic encryption) scheme. This requires computational assumptions, but nothing too outlandish. Then the situation might be very different. If the verifier sends its queries using FHE, then one prover might simulate many provers without compromising security. This was the intuition of a paper from 2000, which Raz and coauthors are finally able to prove. The catch is that even though the single prover can’t break the FHE, it can let its simulated provers play according to a no-signalling distribution. (Or at least this possibility cannot be ruled out.) So proving the security of 1-prover delegated computation requires not only the computational assumptions used for FHE, but also a multi-prover proof system that is secure against no-signalling distributions.

Via this route, Raz and coauthors found themselves in QIP territory. When they started it was known that the value of a no-signalling game can be computed by a linear program of exponential size, so that MIPns[poly provers] is contained in EXP.

This work nails down the complexity of the many-prover setting, showing that EXP is contained in MIPns[poly provers], so that in fact the classes are equal.

It is a nice open question whether the same is true for a constant number of provers, say 3. By comparison, three entangled provers or two classical provers are strong enough to contain NEXP.

One beautiful consequence is that optimizing a linear function over the no-signalling polytope is roughly a P-complete problem. Previously it was known that linear programming was P-complete, meaning that it was unlikely to be solvable in, say, log space. But this work shows that this is true even if the constraints are fixed once and for all, and only the objective function is varied. (And we allow error.) This is established in a recent followup paper [ECCC TR14-170] by two of the same authors.

Francois Le Gall.
Improved Quantum Algorithm for Triangle Finding via Combinatorial Arguments
abstract arXiv:1407.0085

A technical tour-de-force that we will not do justice to here. One intriguing barrier-breaking aspect of the work is that all previous algorithms for triangle finding worked equally well for the standard unweighted case as well as a weighted variant in which each edge is labeled by a number and the goal is to find a set of edges (a,b), (b,c), (c,a) whose weights add up to a particular target. Indeed this algorithm has a query complexity for the unweighted case that is known to be impossible for the weighted version. A related point is that this shows the limitations of the otherwise versatile non-adaptive learning-graph method.

Ryan O’Donnell and John Wright
Quantum Spectrum Testing
abstract arXiv:1501.05028

A classic problem: given \rho^{\otimes n} for \rho an unknown d-dimensional state, estimate some property of \rho. One problem where the answer is still shockingly unknown is to estimate \hat\rho in a way that achieves \mathbb{E} \|\rho-\hat \rho\|_1 \leq\epsilon.
Results from compressed sensing show that n = \tilde\Theta(d^2r^2) for single-copy two-outcome measurements of rank-r states with constant error, but if we allow block measurements then maybe we can do better. Perhaps O(d^2/\epsilon) is possible using the Local Asymptotic Normality results of Guta and Kahn [0804.3876], as Hayashi has told me, but the details are – if we are feeling generous – still implicit. I hope that he, or somebody, works them out. (18 Jan update: thanks Ashley for fixing a bug in an earlier version of this.)

The current talk focuses instead on properties of the spectrum, e.g. how many copies are needed to distinguish a maximally mixed state of rank r from one of rank r+c? The symmetry of the problem (invariant under both permutations and rotations of the form U^{\otimes n}) means that we can WLOG consider “weak Schur sampling” meaning that we measure which S_n \times U_d irrep our state lies in, and output some function of this result. This irrep is described by an integer partition which, when normalized, is a sort of mangled estimate of the spectrum. It remains only to analyze the accuracy of this estimator in various ways. In many of the interesting cases we can say something nontrivial even if n= o(d^2). This involves some delicate calculations using a lot of symmetric polynomials. Some of these first steps (including many of the canonical ones worked out much earlier by people like Werner) are in my paper quant-ph/0609110 with Childs and Wocjan. But the current work goes far far beyond our old paper and introduces many new tools.

Han-Hsuan Lin and Cedric Yen-Yu Lin.
Upper bounds on quantum query complexity inspired by the Elitzur-Vaidman bomb tester
abstract arXiv:1410.0932

This talk considers a new model of query complexity inspired by the Elitzur-Vaidman bomb tester. The bomb tester is a classic demonstration of quantum weirdness: You have a collection of bombs that have a detonation device so sensitive that even a single photon impacting it will set it off. Some of these bombs are live and some are duds, and you’d like to know which is which. Classically, you don’t stand a chance, but quantum mechanically, you can send a photon through a beamsplitter and place the bomb in one arm of a Mach-Zehnder interferometer. A dud leaves the interference intact, so the same detector clicks every time. A live bomb acts as a which-path measurement: with probability 1/2 it explodes, but with probability 1/4 the other (“dark”) detector clicks, and then you have detected a live bomb without setting it off. There are various tricks related to the quantum Zeno effect that let you do much better than this success probability.
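As a minimal numerical illustration of the Zeno trick (our own toy calculation, not from the talk): rotate the photon’s path qubit by a small angle \pi/2N in each of N rounds; a live bomb measures the “bomb arm” every round, so the photon survives all N rounds with probability \cos^{2N}(\pi/2N), which tends to 1, while a dud lets the rotations accumulate so the photon ends up deterministically in the other output.

import math

def zeno_bomb_test(N):
    """Zeno-style interrogation of a LIVE bomb with N rounds.
    Each round rotates the photon's path qubit by pi/(2N); the live bomb then
    'measures' whether the photon entered its arm. Returns
    (P[bomb explodes], P[photon survives all rounds]). If it survives, the photon
    is still in the starting mode; a dud would have rotated it fully into the
    other mode, so a final measurement tells live from dud."""
    theta = math.pi / (2 * N)
    p_survive = math.cos(theta) ** (2 * N)
    return 1.0 - p_survive, p_survive

for N in (1, 10, 100, 1000):
    boom, safe = zeno_bomb_test(N)
    print(f"N={N:5d}  P(explosion)={boom:.4f}  P(safe detection of live bomb)={safe:.4f}")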

The authors define a model of query complexity where one risks explosion for some events, and they showed that the quantum query complexity is related to the bomb query complexity by B(f) = \Theta(Q(f)^2). There were several other interesting results in this talk, but we ran out of steam as it was the last talk before lunch.

Kirsten Eisentraeger, Sean Hallgren, Alexei Kitaev and Fang Song
A quantum algorithm for computing the unit group of an arbitrary degree number field
STOC 2014

One unfortunate weakness of this work: The authors, although apparently knowledgeable about Galois theory, don’t seem to know about this link.

The unit group is a fundamental object in algebraic number theory. It comes up frequently in applications as well, and is used for fully homomorphic encryption, code obfuscation, and many other things.

My [Steve] personal way of understanding the unit group of a number field is that it is a sort of gauge group with respect to the factoring problem. The units in a ring are those numbers with multiplicative inverses. In the ring of integers, where the units are just \pm1 , we can factor composite numbers into 6 = 3 \times 2 = (-3)\times (-2). Both of these are equally valid factorizations; they are equivalent modulo units. In more complicated settings where unique factorization fails, we have factorization into prime ideals, and the group of units can in general become infinite (though always discrete).
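As a toy illustration of an infinite unit group (our own example, not from the talk): in \mathbb{Z}[\sqrt{2}] the units are the elements x + y\sqrt{2} of norm x^2 - 2y^2 = \pm 1, and they are exactly \pm(1+\sqrt{2})^k for integer k. The little script below finds the small ones by brute force and checks that powers of the fundamental unit 1 + \sqrt{2} all have norm \pm 1.

def norm(x, y):
    """Field norm of x + y*sqrt(2) in Z[sqrt(2)]."""
    return x * x - 2 * y * y

# brute-force search for small units (elements of norm +1 or -1)
units = [(x, y) for x in range(-5, 6) for y in range(-5, 6) if norm(x, y) in (1, -1)]
print(sorted(units))   # (±1, 0), (±1, ±1), (±3, ±2), ...

def mul(a, b):
    """Multiply (x1 + y1*sqrt(2)) by (x2 + y2*sqrt(2))."""
    (x1, y1), (x2, y2) = a, b
    return (x1 * x2 + 2 * y1 * y2, x1 * y2 + y1 * x2)

# powers of the fundamental unit 1 + sqrt(2): all are units (norm alternates -1, +1, ...)
u, p = (1, 1), (1, 0)
for k in range(1, 8):
    p = mul(p, u)
    print(k, p, norm(*p))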

The main result of this talk is a quantum algorithm for finding the unit group of a number field of arbitrary degree. One of the technical problems that they had to solve to get this result was the hidden subgroup problem on a continuous group, namely \mathbb{R}^n.

The speaker also announced some work in progress: a quantum algorithm for the principal ideal problem and the class group problem in arbitrary degree number fields [Biasse Song ‘14]. It sounds like not all the details of this are finished yet.

Dominic Berry, Andrew Childs and Robin Kothari
Hamiltonian simulation with nearly optimal dependence on all parameters
abstract 1501.01715

Hamiltonian simulation is not only the original killer app of quantum computers, but also a key subroutine in a large and growing number of problems. I remember thinking it was pretty slick that higher-order Trotter-Suzuki could achieve a run-time of \|H\|t\text{poly}(s)(\|H\|t/\epsilon)^{o(1)} where t is the time we simulate the Hamiltonian for and s is the sparsity. I also remember believing that the known optimality theorems for Trotter-Suzuki (sorry I can’t find the reference, but it involves decomposing e^{t(A+B)} for the free Lie algebra generated by A,B) meant that this was essentially optimal.

Fortunately, Berry, Childs and Kothari (and in other work, Cleve) weren’t so pessimistic, and have blasted past this implicit barrier. This work synthesizes everything that comes before to achieve a run-time of \tau \text{poly}\log(\tau/\epsilon), where \tau = \|H\|_{\max}st (here \|H\|_{\max} = \max_{i,j} |H_{i,j}|, which can be related to the earlier bounds via \|H\| \leq d \|H\|_{\max}).

One quote I liked: “but this is just a generating function for Bessel functions!” Miraculously, Dominic makes that sound encouraging. The lesson I suppose is to find an important problem (like Hamiltonian simulation) and to approach it with courage.

Salman Beigi and Amin Gohari
Wiring of No-Signaling Boxes Expands the Hypercontractivity Ribbon
abstract arXiv:1409.3665

If you have some salt water with salt concentration 0.1% and some more with concentration 0.2%, then anything in the range [0.1, 0.2] is possible, but no amount of mixing will give you even a single drop with concentration 0.05% or 0.3%, even if you start with oceans at the initial concentrations. Similarly if Alice and Bob share an unlimited number of locally unbiased random bits with correlation \eta they cannot produce even a single bit with correlation \eta' > \eta if they don’t communicate. This was famously proved by Reingold, Vadhan and Wigderson.

This talk does the same thing for no-signaling boxes. Let’s just think about noisy PR boxes to make this concrete. The exciting thing about this work is that it doesn’t just prove a no-distillation theorem but it defines an innovative new framework for doing so. The desired result feels like something from information theory, in that there is a monotonicity argument, but it needs to use quantities that do not increase with tensor product.

Here is one such quantity. Define the classical correlation measure \rho(A,B) = \max \text{Cov}(f,g) where f:A\to \mathbb{R}, g:B\to \mathbb{R} and each have variance 1. Properties:

  • 0 \leq \rho(A,B) \leq 1
  • \rho(A,B) =0 iff p_{AB} = p_A \cdot p_B
  • \rho(A^n, B^n) = \rho(A,B)
  • for any no-signaling box, \rho(A,B) \leq \max(\rho(A,B|X,Y), \rho(X,Y))

Together this shows that any wiring of boxes cannot increase this quantity.
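A quick numerical sanity check of the tensorization property above (a standard fact, not from the talk): for finite alphabets, \rho(A,B) equals the second-largest singular value of the matrix Q_{ab} = p(a,b)/\sqrt{p(a)p(b)}. For two \pm 1-valued bits with correlation \eta this gives \rho = |\eta|, unchanged under tensor powers.

import numpy as np

def maximal_correlation(p_ab):
    """rho(A,B) for a finite joint distribution given as a 2D array p_ab[a,b]."""
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)
    q = p_ab / np.sqrt(np.outer(p_a, p_b))
    s = np.linalg.svd(q, compute_uv=False)
    return s[1]        # s[0] is always 1; the next singular value is rho(A,B)

eta = 0.3
p = np.array([[(1 + eta) / 4, (1 - eta) / 4],
              [(1 - eta) / 4, (1 + eta) / 4]])   # two ±1 bits with correlation eta
print(maximal_correlation(p))               # ~0.3
print(maximal_correlation(np.kron(p, p)))   # joint distribution of (A1A2, B1B2): still ~0.3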

The proof of this involves a more sophisticated correlation measure that is not just a single number but is a region called the hypercontractivity ribbon (originally due to [Ahlswede, Gacs ‘76]). This is defined to be the set of (\lambda_1, \lambda_2) such that for any f,g we have
\mathbb{E}[f_A g_B] \leq \|f_A\|_{\frac{1}{\lambda_1}} \|g_B\|_{\frac{1}{\lambda_2}}
A remarkable result of [Nair ‘14] is that this is equivalent to the condition that
I(U;AB) \geq \lambda_1 I(U;A) + \lambda_2 I(U;B)
for any extension of the distribution on AB to one on ABU.

Some properties.

  • The ribbon is [0,1]\times [0,1] iff A,B are independent.
  • It is stable under tensor power.
  • monotonicity: local operations on A,B enlarge R

For boxes define R(A,B|X,Y) = \cap_{x,y} R(A,B|x,y). The main theorem is then that rewiring never shrinks the hypercontractivity ribbon. And as a result, PR box noise cannot be reduced.

These techniques are beautiful and seem as though they should have further application.

Masahito Hayashi
Estimation of group action with energy constraint
abstract arXiv:1209.3463

Your humble bloggers were at this point also facing an energy constraint which limited our ability to estimate what happened. The setting is that you pick a state, nature applies a unitary (specifically from a group representation) and then you pick a measurement and try to minimize the expected error in estimating the group element corresponding to what nature did. The upshot is that entanglement seems to give a quadratic improvement in metrology. Noise (generally) destroys this. This talk showed that a natural energy constraint on the input also destroys this. One interesting question from Andreas Winter was about what happens when energy constraints are applied also to the measurement, along the lines of 1211.2101 by Navascues and Popescu.

Jan 14 update: forgot one! Sorry Ashley.

Ashley Montanaro
Quantum pattern matching fast on average
abstract
arXiv:1408.1816

Continuing the theme of producing shocking and sometimes superpolynomial speedups to average-case problems, Ashley shows that finding a random pattern of length m in a random text of length n can be done in quantum time \tilde O(\sqrt{n/m}\exp(\sqrt{\log m})). Here “random” means something subtle. The text is uniformly random and the pattern is either uniformly random (in the “no” case) or is a random substring of the text (in the “yes” case). There is also a higher-dimensional generalization of the result.

One exciting thing about this is that it is a fairly natural application of Kuperberg’s algorithm for the dihedral-group HSP; in fact the first such application, although Kuperberg’s original paper does mention a much less natural such variant. (correction: not really the first – see Andrew’s comment below.)
It is interesting to think about this result in the context of the general question about quantum speedups for promise problems. It has long been known that query complexity cannot be improved by more than a polynomial (perhaps quadratic) factor for total functions. The dramatic speedups for things like the HSP, welded trees and even more contrived problems must then use the fact that they work for partial functions, and indeed even “structured” functions. Pattern matching is of course a total function, but not one that will ever be hard on average over a distribution with, say, i.i.d. inputs. Unless the pattern is somehow planted in the text, most distributions simply fail to match with overwhelming probability. It is funny that for i.i.d. bit strings this stops being true when m = O(\log n), which is almost exactly when Ashley’s speedup becomes merely quadratic. So pattern matching is a total function whose hard distributions all look “partial” in some way, at least when quantum speedups are possible. This is somewhat vague, and it may be that some paper out there expresses the idea more clearly.
Part of the strength of this paper is then finding a problem where the promise is so natural. It gives me new hope for the future relevance of things like the HSP.

Elsevier again, and collective action

We all know about higher education being squeezed financially. Government support is falling and tuition is going up. We see academic jobs getting scarcer, and more temporary. The pressure for research to focus on the short term is going up. Some of these changes may be fair, since society always has to balance its immediate priorities against its long-term progress. At other times, like when comparing the NSF’s $7.6 billion FY2014 budget request to the ongoing travesty that is military procurement, it does feel as though we are eating our seed corn for not very wise reasons.
Against this backdrop, the travesty that is scientific publishing may feel like small potatoes. But now we are starting to find out just how many potatoes. Tim Gowers has been doing an impressive job of digging up exactly how much various British universities pay for their Elsevier subscriptions. Here is his current list. Just to pick one random example, the University of Bristol (my former employer), currently pays Elsevier a little over 800,000 pounds (currently $1.35M) for a year’s access to their journals. Presumably almost all research universities pay comparable amounts.
To put this number in perspective, let’s compare it not to the F-35, but to something that delivers similar value: arxiv.org. Its total budget for 2014 is about 750,000 US dollars (depending on how you count overhead), and of course this includes access for the entire world, not only the University of Bristol. To be fair, ScienceDirect has about 12 times as many articles and the median quality is probably higher. But overall it is clearly vastly more expensive for society to have its researchers communicate in this way.
Another way to view the £800,000 price tag is in terms of the salaries of about 40 lecturers (\approx assistant professors), or some equivalent mix of administrators, lecturers and full professors. The problem is that these are not substitutes. If Bristol hired 40 lecturers, they would not each spend one month per year building nearly-free open-access platforms and convincing the world to use them; they would go about getting grants, recruiting grad students and publishing in the usual venues. There are problems of collective action, of the path dependence that comes with a reputation economy and of the diffuse costs and concentrated benefits of the current system.
I wish I could end with some more positive things to say. I think at least for now it is worth getting across the idea that there is a crisis, and that we should all do what we can to help with it, especially when we can do so without personal cost. In this way, we can hopefully create new social norms. For example, it is happily unconventional now to not post work on arxiv.org, and I hope that it comes to be seen also as unethical. In the past, it was common to debate whether QIP should have published proceedings. Now major CS conferences are cutting themselves loose from parasitic professional societies (see in particular the 3% vote in favor of the status quo) and QIP has begun debating whether to require all submissions be accompanied by arxiv posts (although this is of course not at all clear-cut). If we cannot have a revolution, hopefully we can at least figure out an evolutionary change towards a better scientific publishing system. And then we can try to improve military procurement.

TQC 2014!

While many of us are just recovering from QIP, I want to mention that the submission deadline is looming for the conference TQC, which perhaps should be called TQCCC because its full name is Theory of Quantum Computation, Communication and Cryptography. Perhaps this isn’t done because it would make the conference seem too classical? But TQQQC wouldn’t work so well either. I digress.
The key thing I want to mention is the imminent 15 Feb submission deadline.
I also want to mention that TQC is continuing to stay ahead of the curve with its open-access author-keeps-copyright proceedings, and this year with some limited open reviewing (details here). I recently spoke to a doctor who complained that despite even her Harvard Medical affiliation, she couldn’t access many relevant journals online. While results of taxpayer-funded research on drug efficacy, new treatments and risk factors remain locked up, at least our community is ensuring that anyone wanting to work on the PPT bound entanglement conjecture will be able to catch up to the research frontier without having to pay $39.95 per article.
One nice feature about these proceedings is that if you later want to publish a longer version of your submission in a journal, then you will not face any barriers from TQC. I also want to explicitly address one concern that some have raised about TQC, which is that the published proceedings will prevent authors from publishing their work elsewhere. For many, the open access proceedings will be a welcome departure from the usual exploitative policies of not only commercial publishers like Elsevier, but also the academic societies like ACM and IEEE. But I know that others will say “I’m happy to sign your petitions, but at the end of the day, I still want to submit my result to PRL” and who am I to argue with this?
So I want to stress that submitting to TQC does not prevent submitting your results elsewhere, e.g. to PRL. If you publish one version in TQC and a substantially different version (i.e. with substantial new material) in PRL, then not only is TQC fine with it, but it is compatible with APS policy which I am quoting here:

Similar exceptions [to the prohibition against double publishing] are generally made for work disclosed earlier in abbreviated or preliminary form in published conference proceedings. In all such cases, however, authors should be certain that the paper submitted to the archival journal does not contain verbatim text, identical figures or tables, or other copyrighted materials which were part of the earlier publications, without providing a copy of written permission from the copyright holder. [ed: TQC doesn’t require copyright transfer, because it’s not run by people who want to exploit you, so you’re all set here] The paper must also contain a substantial body of new material that was not included in the prior disclosure. Earlier relevant published material should, of course, always be clearly referenced in the new submission.

I cannot help but mention that even this document (the “APS Policy on Prior Disclosure”) is behind a paywall and will cost you $25 if your library doesn’t subscribe. But if you really want to support this machine and submit to PRL or anywhere else (and enjoy another round of refereeing), TQC will not get in your way.
Part of what makes this easy is TQC’s civilized copyright policy (i.e. you keep it). By contrast, Thomas and Umesh had a more difficult, though eventually resolved, situation when combining STOC/FOCS with Nature.

QIP 2014 accepted talks

This Thanksgiving, even if we can’t all be fortunate enough to be presenting a talk at QIP, we can be thankful for being part of a vibrant research community with so many different lines of work going on. The QIP 2014 accepted talks are now posted with 36 out of 222 accepted. While many of the hot topics of yesteryear (hidden subgroup problem, capacity of the depolarizing channel) have fallen by the wayside, there is still good work happening in the old core topics (algorithms, information theory, complexity, coding, Bell inequalities) and in topics that have moved into the mainstream (untrusted devices, topological order, Hamiltonian complexity).

Here is a list of talks, loosely categorized by topic (inspired by Thomas’s list from last year). I’m pretty sad about missing my first QIP since I joined the field, because its unusually late timing overlaps the first week of the semester at MIT. But in advance of the talks, I’ll write a few words (in italics) about what I would be excited about hearing if I were there.

Quantum Shannon Theory

There are a lot of new entropies! Some of these may be useful – at first for tackling strong converses, but eventually maybe for other applications as well. Others may, well, just contribute to the entropy of the universe. The bounds on entanglement rate of Hamiltonians are exciting, and looking at them, I wonder why it took so long for us to find them.

1a. A new quantum generalization of the Rényi divergence with applications to the strong converse in quantum channel coding
Frédéric Dupuis, Serge Fehr, Martin Müller-Lennert, Oleg Szehr, Marco Tomamichel, Mark Wilde, Andreas Winter and Dong Yang. 1306.3142 1306.1586
merged with
1b. Quantum hypothesis testing and the operational interpretation of the quantum Renyi divergences
Milan Mosonyi and Tomohiro Ogawa. 1309.3228

25. Zero-error source-channel coding with entanglement
Jop Briet, Harry Buhrman, Monique Laurent, Teresa Piovesan and Giannicola Scarpa. 1308.4283

28. Bound entangled states with secret key and their classical counterpart
Maris Ozols, Graeme Smith and John A. Smolin. 1305.0848
It’s funny how bound key is a topic for quant-ph, even though it is something that is in principle purely a classical question. I think this probably is because of Charlie’s influence. (Note that this particular paper is mostly quantum.)

3a. Entanglement rates and area laws
Michaël Mariën, Karel Van Acoleyen and Frank Verstraete. 1304.5931 (This one could also be put in the condensed-matter category.)
merged with
3b. Quantum skew divergence
Koenraad Audenaert. 1304.5935

22. Quantum subdivision capacities and continuous-time quantum coding
Alexander Müller-Hermes, David Reeb and Michael Wolf. 1310.2856

Quantum Algorithms

This first paper is something I tried (unsuccessfully, needless to say) to disprove for a long time. I still think that this paper contains yet-undigested clues about the difficulties of non-FT simulations.

2a. Exponential improvement in precision for Hamiltonian-evolution simulation
Dominic Berry, Richard Cleve and Rolando Somma. 1308.5424
merged with
2b. Quantum simulation of sparse Hamiltonians and continuous queries with optimal error dependence
Andrew Childs and Robin Kothari.
update: The papers now appear to have been merged; the joint (five-author) paper is 1312.1414.

35. Nested quantum walk
Andrew Childs, Stacey Jeffery, Robin Kothari and Frederic Magniez.
(not sure about arxiv # – maybe this is a generalization of 1302.7316?)

Quantum games: from Bell inequalities to Tsirelson inequalities

It is interesting how the first generation of quantum information results is about showing the power of entanglement, and now we are all trying to limit the power of entanglement. These papers are, in a sense, about toy problems. But I think the math of Tsirelson-type inequalities is going to be important in the future. For example, the monogamy bounds that I’ve recently become obsessed with can be seen as upper bounds on the entangled value of symmetrized games.
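
To fix ideas, here is the textbook example (not anything from these papers): for the CHSH correlator, the local-hidden-variable bound and Tsirelson’s bound are

\[
\big|\langle A_0 B_0\rangle + \langle A_0 B_1\rangle + \langle A_1 B_0\rangle - \langle A_1 B_1\rangle\big| \;\le\;
\begin{cases} 2 & \text{classically,}\\ 2\sqrt{2} & \text{quantum-mechanically,}\end{cases}
\]

and the papers below ask what happens to such quantum upper bounds under parallel repetition and for more general constraint-system games.
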
4a. Binary constraint system games and locally commutative reductions
Zhengfeng Ji. 1310.3794
merged with
4b. Characterization of binary constraint system games
Richard Cleve and Rajat Mittal. 1209.2729

20. A parallel repetition theorem for entangled projection games
Irit Dinur, David Steurer and Thomas Vidick. 1310.4113

33. Parallel repetition of entangled games with exponential decay via the superposed information cost
André Chailloux and Giannicola Scarpa. 1310.7787

Untrusted randomness generation

Somehow self-testing has exploded! There is a lot of information theory here, but the convex geometry of conditional probability distributions is also relevant, and it will be interesting to see more connections here in the future.
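
For non-specialists, a standard definition rather than anything specific to these papers: a weak randomness source X is quantified by its min-entropy,

\[
H_{\min}(X) \;=\; -\log_2 \max_x \Pr[X = x],
\]

and, as I understand it, “amplification from any min-entropy source” means converting any X with nonzero min-entropy into near-uniform bits, with the untrusted devices certified only through their Bell-inequality violations.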

5a. Self-testing quantum dice certified by an uncertainty principle
Carl Miller and Yaoyun Shi.
merged with
5b. Robust device-independent randomness amplification from any min-entropy source
Kai-Min Chung, Yaoyun Shi and Xiaodi Wu.

19. Infinite randomness expansion and amplification with a constant number of devices
Matthew Coudron and Henry Yuen. 1310.6755

29. Robust device-independent randomness amplification with few devices
Fernando Brandao, Ravishankar Ramanathan, Andrzej Grudka, Karol Horodecki, Michal Horodecki and Pawel Horodecki. 1310.4544

The fuzzy area between quantum complexity theory, quantum algorithms and classical simulation of quantum systems. (But not Hamiltonian complexity.)

I had a bit of trouble categorizing these, and also in deciding how surprising I should find each of the results. I am also somewhat embarrassed about still not really knowing exactly what a quantum double is.

6. Quantum interactive proofs and the complexity of entanglement detection
Kevin Milner, Gus Gutoski, Patrick Hayden and Mark Wilde. 1308.5788

7. Quantum Fourier transforms and the complexity of link invariants for quantum doubles of finite groups
Hari Krovi and Alexander Russell. 1210.1550

16. Purifications of multipartite states: limitations and constructive methods
Gemma De Las Cuevas, Norbert Schuch, David Pérez-García and J. Ignacio Cirac.

Hamiltonian complexity started as a branch of quantum complexity theory but by now has mostly devoured its host

A lot of exciting results. The poly-time algorithm for 1D Hamiltonians appears not quite ready for practice yet, but I think it is close. The Cubitt-Montanaro classification theorem brings new focus to transverse-field Ising, and to the weird world of stoquastic Hamiltonians (along which lines I think the strengths of stoquastic adiabatic evolution deserve more attention). The other papers each do more or less what we expect, but introduce a lot of technical tools that will likely see more use in the coming years.
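
One standard definition worth having in hand (not specific to these papers): a Hamiltonian H is stoquastic in a fixed basis {|x⟩} if all of its off-diagonal matrix elements are real and non-positive,

\[
\langle x | H | y \rangle \;\le\; 0 \quad \text{for all } x \neq y,
\]

which makes the Gibbs weights $e^{-\beta H}$ entrywise non-negative and is why such Hamiltonians tend to be friendlier to Monte Carlo methods.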

13. A polynomial-time algorithm for the ground state of 1D gapped local Hamiltonians
Zeph Landau, Umesh Vazirani and Thomas Vidick. 1307.5143

15. Classification of the complexity of local Hamiltonian problems
Toby Cubitt and Ashley Montanaro. 1311.3161

30. Undecidability of the spectral gap
Toby Cubitt, David Pérez-García and Michael Wolf.

23. The Bose-Hubbard model is QMA-complete
Andrew M. Childs, David Gosset and Zak Webb. 1311.3297

24. Quantum 3-SAT is QMA1-complete
David Gosset and Daniel Nagaj. 1302.0290

26. Quantum locally testable codes
Dorit Aharonov and Lior Eldar. (also QECC) 1310.5664

Codes with spatially local generators aka topological order aka quantum Kitaev theory

If a theorist is going to make some awesome contribution to building a quantum computer, it will probably be via this category. Yoshida’s paper is very exciting, although I think the big breakthroughs here were in Haah’s still underappreciated masterpiece. Kim’s work gives operational meaning to the topological entanglement entropy, a quantity I had always viewed with perhaps undeserved skepticism. It too was partially anticipated by an earlier paper, by Osborne.

8. Classical and quantum fractal code
Beni Yoshida. I think this title is a QIP-friendly rebranding of 1302.6248

21. Long-range entanglement is necessary for a topological storage of information
Isaac Kim. 1304.3925

Bit commitment is still impossible (sorry Horace Yuen) but information-theoretic two-party crypto is alive and well

The math in this area is getting nicer, and the protocols more realistic. The most unrealistic thing about two-party crypto is probably the idea that it would ever be used, when people either don’t care about security or don’t even trust NIST not to be a tool of the NSA.

10. Entanglement sampling and applications
Frédéric Dupuis, Omar Fawzi and Stephanie Wehner. 1305.1316

36. Single-shot security for one-time memories in the isolated qubits model
Yi-Kai Liu. 1304.5007

Communication complexity

It is interesting how quantum and classical techniques are not so far apart for many of these problems, in part because classical TCS is approaching so many problems using norms, SDPs, Banach spaces, random matrices, etc.
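
For readers outside the area, the standard definition (not from the paper): an XOR function is a two-party function whose value depends only on the bitwise XOR of the inputs,

\[
F(x,y) \;=\; f(x \oplus y) \quad \text{for some } f : \{0,1\}^n \to \{0,1\},
\]

which is exactly the structure that makes Fourier-analytic and norm-based techniques applicable.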

12. Efficient quantum protocols for XOR functions
Shengyu Zhang. 1307.6738

9. Noisy interactive quantum communication
Gilles Brassard, Ashwin Nayak, Alain Tapp, Dave Touchette and Falk Unger. 1309.2643 (also info th / coding)

Foundations

I hate to be a Philistine, but I wonder what disaster would befall us if there WERE a psi-epistemic model that worked. Apart from being able to prove false statements. Maybe a commenter can help?

14. No psi-epistemic model can explain the indistinguishability of quantum states
Eric Cavalcanti, Jonathan Barrett, Raymond Lal and Owen Maroney. 1310.8302

??? but it’s from THE RAT

update: A source on the PC says that this is an intriguing mixture of foundations and Bell inequalities, again in the “Tsirelson regime” of exploring the boundary between quantum and non-signaling.

17. Almost quantum
Miguel Navascues, Yelena Guryanova, Matty Hoban and Antonio Acín.

FTQC/QECC/papers Kalai should read 🙂

I love the part where Daniel promises not to cheat. Even though I am not actively researching in this area, I think the race between surface codes and concatenated codes is pretty exciting.

18. What is the overhead required for fault-tolerant quantum computation?
Daniel Gottesman. 1310.2984

27. Universal fault-tolerant quantum computation with only transversal gates and error correction
Adam Paetznick and Ben Reichardt. 1304.3709

Once we equilibrate, we will still spend a fraction of our time discussing thermodynamics and quantum Markov chains

I love just how diverse and deep this category is. There are many specific questions that would be great to know about, and the fact that big general questions are still being solved is a sign of how far we still have to go. I enjoyed seeing the Brown-Fawzi paper solve problems that stumped me in my earlier work on the subject, and I was also impressed by the Cubitt et al. paper being able to make a new and basic statement about classical Markov chains. The other two made me happy through their “the more entropies the better” approach to the world.
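
For reference, the textbook form of the principle that 31 sharpens: erasing information while in contact with a bath at inverse temperature β = 1/(k_B T) costs heat,

\[
\beta\,\Delta Q \;\ge\; \Delta S, \qquad\text{i.e.}\qquad \Delta Q \;\ge\; k_B T \ln 2 \text{ per erased bit},
\]

where ΔS is the entropy removed from the system; judging from the title, the Reeb-Wolf contribution is a correction term that becomes important when the bath is of finite size.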

31. An improved Landauer Principle with finite-size corrections and applications to statistical physics
David Reeb and Michael M. Wolf. 1306.4352

32. The second laws of quantum thermodynamics
Fernando Brandao, Michal Horodecki, Jonathan Oppenheim, Nelly Ng and Stephanie Wehner. 1305.5278

34. Decoupling with random quantum circuits
Winton Brown and Omar Fawzi. 1307.0632

11. Stability of local quantum dissipative systems
Toby Cubitt, Angelo Lucia, Spyridon Michalakis and David Pérez-García. 1303.4744

happy deadline everyone!

The main proof in one of my QIP submissions has developed a giant hole.
Hopefully the US Congress does a better job with its own, somewhat higher-stakes, deadline. In many ways their job is easier: they can just submit the same thing as last year, and they don’t need to compress their result into three pages. Unfortunately, things are not looking good for them either!
Good luck to all of us.

Are we ready for Venture Qapital?

From cnet, and via Matt Leifer, comes news of a new venture capital firm known as the Quantum Wave Fund. According to their website:

Quantum Wave Fund is a venture capital firm focused on seeking out early stage private companies with breakthrough quantum technology. Our mission is to help these companies capitalize on their opportunities and provide a platform for our investors to participate in the quantum technology wave.

The cnet article clarifies that “quantum technology” means “Security, new measurement devices, and new materials,” which seems about right for what we can expect to meaningfully commercialize in the near term. In fact, two companies (ID Quantique and MagiQ) are already doing so. However, I think it is significant that ID Quantique’s first listed product uses AES-256 (but can be upgraded to use QKD), and that MagiQ’s product list first describes technologies like waveform generation and single-photon detection before advertising their QKD technology at the bottom of the page.
It’ll be interesting to see where this goes. Already it has exposed several areas of my own ignorance. For example, I learned from the internet that VCs typically want to get their money back within 10-12 years, which gives a rough estimate of how near-term a technology has to be before we can expect investment in it. Another area I know little about, and which is harder to google, is exactly what sort of commercial applications exist for the many technologies related to quantum information, such as precision measurement and timing. This question is, I think, going to be an increasingly important one for all of us.

QIP 2015??

The following is a public service announcement from the QIP steering committee. (The “service” is from you, by the way, not the committee.)

Have you ever wondered why the Quantum Information Processing conference seems to travel everywhere except to your hometown? No need to wonder anymore. It’s because you haven’t organized it yet!
The QIP steering committee would like to encourage anyone tentatively interested in hosting QIP 2015 to register their interest with one of us by email prior to QIP 2013 in Beijing. The way it works is that potential hosts present their cases at this year’s QIP, there is an informal poll of the QIP audience, and then soon after, the steering committee chooses between proposals.
Don’t delay!

By the way, QIP 2014 is in Barcelona, so a loose tradition would argue that 2015 should be in North (or South!) America. Why? Some might say fairness or precedent, but for our community, perhaps a better reason is to keep the Fourier transform nice and peaked.