Having recently rediscovered arxiv.org/submit, I thought I’d mention a few papers to come out of team UW.
Two particularly exciting single-author papers are from students here.
Kamil Michnicki has developed a 3-d stabilizer code on qubits with an energy barrier that scales as a power of the system size. By contrast, such a result is impossible in 2-d, where stabilizer codes can achieve only a constant energy barrier, and the best energy barrier previously obtained in 3-d was logarithmic in the system size, from Haah’s breakthrough cubic code. Sadly, this code appears not to have the thermal stability properties of the 4-d toric code, but it is nevertheless an exciting step towards a self-correcting quantum memory.
1208.3496: 3-d quantum stabilizer codes with a power law energy barrier
Kamil Michnicki
David Rosenbaum has written a mostly classical algorithms paper about the old problem of group isomorphism: given two groups specified by their multiplication tables, determine whether they are isomorphic. The problem reduces to graph isomorphism, but may be strictly easier. Since any group with $n$ elements has a generating set of size $\leq \log_2 n$, the problem can be solved in time $n^{\log_2 n + O(1)}$ by trying all possible images of such a generating set (see the sketch below). While faster algorithms have been given in many special cases, this trivial $n^{O(\log n)}$ upper bound has resisted attack for decades. See Gödel’s Lost Letter for some discussion. The particular classes of groups considered to be hardest have been the nilpotent (or more generally solvable) groups, since paradoxically the rigidity of highly non-abelian groups (e.g. simple groups) makes them easier to address. David found a polynomial speedup for solvable groups, thus making the first progress on this problem since the initial $n^{\log n + O(1)}$-time algorithms.
1205.0642: Breaking the $n^{\log n}$ Barrier for Solvable-Group Isomorphism
David Rosenbaum
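To see where the $n^{\log n}$ bound comes from, here is a brute-force sketch in Python. This is my illustration of the folklore algorithm only, emphatically not David's improvement. Groups are given as $n \times n$ multiplication tables, and the greedy generator-picking step works because each new generator at least doubles the generated subgroup, so at most $\log_2 n$ generators are needed.

```python
# Folklore n^(log n + O(1)) group-isomorphism test -- an illustrative sketch,
# not David Rosenbaum's algorithm. Groups are n x n multiplication tables
# over elements 0..n-1.
from itertools import product

def identity(mul):
    n = len(mul)
    return next(x for x in range(n) if all(mul[x][y] == y for y in range(n)))

def close(mul, seed):
    """Smallest subset containing `seed` and closed under multiplication."""
    cur, changed = set(seed), True
    while changed:
        changed = False
        for a in list(cur):
            for b in list(cur):
                if mul[a][b] not in cur:
                    cur.add(mul[a][b])
                    changed = True
    return cur

def generating_set(mul):
    """Greedy generators: each addition at least doubles the subgroup,
    so at most log2(n) generators are chosen."""
    gens, sub = [], {identity(mul)}
    while len(sub) < len(mul):
        g = next(x for x in range(len(mul)) if x not in sub)
        gens.append(g)
        sub = close(mul, sub | {g})
    return gens

def extends(mulG, mulH, gens, imgs):
    """Try to grow the map gens -> imgs into an isomorphism, then verify it."""
    n, phi = len(mulG), {identity(mulG): identity(mulH)}
    frontier = list(phi)
    while frontier:                      # every element is a word in the gens
        a = frontier.pop()
        for g, h in zip(gens, imgs):
            b, c = mulG[a][g], mulH[phi[a]][h]
            if b in phi:
                if phi[b] != c:
                    return False
            else:
                phi[b] = c
                frontier.append(b)
    return (len(phi) == n and len(set(phi.values())) == n and
            all(phi[mulG[a][b]] == mulH[phi[a]][phi[b]]
                for a in range(n) for b in range(n)))

def isomorphic(mulG, mulH):
    if len(mulG) != len(mulH):
        return False
    gens = generating_set(mulG)          # <= log2(n) gens => n^log(n) candidates
    return any(extends(mulG, mulH, gens, imgs)
               for imgs in product(range(len(mulH)), repeat=len(gens)))

Z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
V4 = [[a ^ b for b in range(4)] for a in range(4)]   # Klein four-group
print(isomorphic(Z4, Z4), isomorphic(Z4, V4))        # True False
```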
Lukas Svec (together with collaborators) also has a nice way of improving the Gottesman-Knill simulations that have been so effective in estimating FTQC thresholds. Gottesman-Knill allows mixtures of Clifford unitaries to be simulated classically, which seems as though it should be effective only for simulating unital noise. However, throwing away a qubit and replacing it with the state $|0\rangle$ can also be encompassed within the Gottesman-Knill approach. This insight allows them to give much better simulations of amplitude-damping noise than any previous approach.
1207.0046: Approximation of real error channels by Clifford channels and Pauli measurements
Mauricio Gutiérrez, Lukas Svec, Alexander Vargo, Kenneth R. Brown
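A toy numerical illustration of the idea (mine, not the paper's construction): the crudest stabilizer-friendly stand-in for amplitude damping mixes "do nothing" with "reset to $|0\rangle$". It reproduces the populations of the true channel exactly, while getting the coherence decay only approximately, which is why the paper's more refined mixtures are needed.

```python
# Toy comparison (my illustration, not the construction from the paper):
# amplitude damping vs. a mixture of "identity" and "reset to |0>", both
# acting on the state |+><+|.
import numpy as np

def apply_kraus(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

g = 0.1                                       # damping probability gamma
amp_damp = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
            np.array([[0, np.sqrt(g)], [0, 0]])]
reset = [np.array([[1, 0], [0, 0]]),          # reset-to-|0> channel,
         np.array([[0, 1], [0, 0]])]          # itself a stabilizer operation

rho = np.array([[0.5, 0.5], [0.5, 0.5]])      # |+><+|
exact = apply_kraus(rho, amp_damp)
approx = (1 - g) * rho + g * apply_kraus(rho, reset)
print(exact)    # populations [0.55, 0.45]; coherence 0.5*sqrt(1-g) ~ 0.474
print(approx)   # populations [0.55, 0.45]; coherence 0.5*(1-g)     = 0.450
```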
There have also been two papers about the possibilities of two dimensions. David has a paper explaining how a general circuit with arbitrary two-qubit interactions on $n$ qubits can be simulated in a 2-d architecture by using $O(n^2)$ qubits. If only $k$ gates happen per time step, then $O(kn)$ qubits suffice. The key trick is to use classical control to perform teleportation chains, an idea whose provenance I’m unclear on, but which is based in part on MBQC and in part on a remarkable paper of Terhal and DiVincenzo. (A toy version of the chain appears after the reference below.)
1205.0036: Optimal Quantum Circuits for Nearest-Neighbor Architectures
David Rosenbaum
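Here is the promised toy version of a teleportation chain, in my notation and with the details of David's construction omitted: lay $m$ Bell pairs along a path and have every intermediate site perform a Bell measurement. All of those measurements commute, so they fit in a single time step, and the state reappears at the far end up to a Pauli correction determined by classical parities of the outcome bits $(a_i, b_i)$:

```latex
\[
  |\psi\rangle \otimes |\Phi^+\rangle^{\otimes m}
  \;\xrightarrow{\;m\text{ simultaneous Bell measurements}\;}\;
  X^{a} Z^{b} \, |\psi\rangle ,
  \qquad
  a = \bigoplus_{i=1}^{m} a_i , \quad
  b = \bigoplus_{i=1}^{m} b_i .
\]
```

The final Pauli $X^a Z^b$ is applied under classical control, so a qubit can cross a long distance in constant quantum depth.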
Examining particular algorithms can enable more dramatic speedups, and by examining Shor’s algorithm, Paul Pham was able to reduce the depth of a 2-d nearest-neighbor implementation to polylogarithmic, surprisingly finding an improved implementation of the most well-studied of quantum algorithms.
1207.6655: A 2D Nearest-Neighbor Quantum Architecture for Factoring
Paul Pham, Krysta M. Svore
David and I also have a joint paper on an alternate oracle model in which one input to the oracle is supplied by the user and a second input is random noise. While in some cases (e.g. a Grover oracle that misfires) this does not lead to quantum advantages, we find that in other cases, quantum computers can solve problems in a single query that classical computers cannot solve with unlimited queries. Along the way, we address the question of when some number of (quantum or classical) queries yield no useful information at all about the answer to an oracle problem.
1111.1462: Uselessness for an Oracle Model with Internal Randomness
David Rosenbaum, Aram W. Harrow
My co-blogger Steve has also been active. Steve and his co-authors (does that make them my step-co-authors?) have written perhaps the definitive work on how to estimate approximately low-rank density matrices using a small number of measurements.
1205.2300: Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators
Steven T. Flammia, David Gross, Yi-Kai Liu, Jens Eisert
Steve, together with Ghost Pontiff Dave and Dave’s former student Gregory Crosswhite, has also posted
1207.2769: Adiabatic Quantum Transistors
Dave Bacon, Steven T. Flammia, Gregory M. Crosswhite
This paper proposes a deeply innovative approach to quantum computing, in which one adiabatically transforms one simple spatially-local Hamiltonian to another. Unlike previous approaches, it seems to have a chance of having some compelling fault-tolerance properties, although analyzing this remains challenging.
Steve and I also have a brief note (arXiv:1204.3404) relevant to my ongoing debate with Gil Kalai (see here, here, here, here, here, here or here) in which we point out counterexamples to one of Gil’s conjectures. This post specifically contains more discussion of the issue.
Finally, I’ve been clearing out a lot of my unpublished backlog this year.
My co-authors and I wrote a short paper explaining the main ideas behind the superactivation of zero-error capacity. The principle is similar to that found in all of the additivity violations based on random quantum channels: we choose a correlated distribution over pairs of channels $(\mathcal{N}_1, \mathcal{N}_2)$ in a way that forces $\mathcal{N}_1 \otimes \mathcal{N}_2$ to have some desired behavior (e.g. when acting on a particular maximally entangled state). At the same time, apart from this constraint, the distribution is as random as possible. Hopefully we can then show that any single use of $\mathcal{N}_1$ or $\mathcal{N}_2$ alone has low capacity, or in our case, zero zero-error capacity. In our case, there are a few twists, since we are talking about zero-error capacity, which is a fragile property more suited to algebraic geometry than to the usual approximate techniques of information theory. On the other hand, this means that at many points we can show that properties hold with probability 1. The other nontrivial twist is that we have to show not only that $\mathcal{N}_1$ has zero zero-error capacity (yeah, I know it’s a ridiculous expression) but that $\mathcal{N}_1^{\otimes k}$ does for all $k$. This can be done with some more algebraic geometry (which is a fancy way of saying that the set of simultaneous zeroes of a set of polynomials has measure equal to either 0 or 1) as well as the fact that the property of being an unextendible product basis is stable under tensor product.
1109.0540: Entanglement can completely defeat quantum noise
Jianxin Chen, Toby S. Cubitt, Aram W. Harrow, Graeme Smith
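Stated symbolically (my notation; $C_0$ denotes the zero-error capacity), the superactivation phenomenon is:

```latex
\[
  C_0\!\bigl(\mathcal{N}_1^{\otimes k}\bigr) \;=\;
  C_0\!\bigl(\mathcal{N}_2^{\otimes k}\bigr) \;=\; 0
  \quad \text{for all } k \ge 1,
  \qquad \text{yet} \qquad
  C_0\!\bigl(\mathcal{N}_1 \otimes \mathcal{N}_2\bigr) \;>\; 0 .
\]
```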
One paper that was a fun bridge-building exercise (with a nice shout out from Umesh Vazirani/BILL GASARCH) was a project with quantum information superstar Fernando Brandão as well as a pack of classical CS theorists. My part of the paper involved connections between QMA(2) and optimizing polynomials over the unit sphere in $\mathbb{R}^n$. For example, if $a_1, \ldots, a_m$ are the rows of a matrix $A$, then define $\|A\|_{2\to 4} = \max_{\|x\|_2 = 1} \|Ax\|_4$. Taking the fourth power, $\|A\|_{2\to 4}^4 = \max_{\|x\|_2 = 1} \sum_i \langle a_i, x\rangle^4$ is the maximum energy attainable by product states $x \otimes x$ under the Hamiltonian $H = \sum_i (a_i a_i^T) \otimes (a_i a_i^T)$. Thus, hardness results and algorithms can be ported in both directions. One natural algorithm is called "the Lasserre SDP hierarchy" classically and "optimizing over $k$-extendable states" quantumly, but in fact these are essentially the same thing (an observation dating back to a 2003 paper of Doherty, Parrilo, and Spedalieri). There is much more to the paper, but I’ll leave it at that for now.
1205.4484: Hypercontractivity, Sum-of-Squares Proofs, and their Applications
Boaz Barak, Fernando G.S.L. Brandão, Aram W. Harrow, Jonathan A. Kelner, David Steurer, Yuan Zhou
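The identity underlying that correspondence is easy to check numerically. Below is a quick sanity check, entirely my own illustration: for any unit vector $x$, $\|Ax\|_4^4$ equals the energy of the product state $x \otimes x$ under $H = \sum_i (a_i a_i^T)^{\otimes 2}$.

```python
# Numerical check (my illustration) that ||Ax||_4^4 = <x⊗x| H |x⊗x>
# for H = sum_i (a_i a_i^T) ⊗ (a_i a_i^T), with a_i the rows of A.
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))

# Hamiltonian on the two-copy space R^n ⊗ R^n.
H = sum(np.kron(np.outer(a, a), np.outer(a, a)) for a in A)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)            # unit vector, i.e. a "product state" x⊗x
xx = np.kron(x, x)

print(np.sum((A @ x) ** 4))       # ||Ax||_4^4
print(xx @ H @ xx)                # energy of x⊗x under H: the same number
```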
Another big collaboration taking me out of my usual areas was this paper on quantum architecture. Suppose that our quantum computer is made up of many small nodes (say ion traps), connected by long-range links (say optical fibers), as has been recently advocated. This computer would not be stuck with a 2-d topology, but could be connected in any reasonably low-degree configuration. Our paper shows that a hypercube topology (among many other possibilities) is enough to simulate general quantum circuits with low overhead. This enables parallelized versions of Grover search that finally (in my opinion) address the problem raised by Grover and Rudolph about the memory requirements for the "best" known collision and element-distinctness algorithms. As a result, we find space-time tradeoffs (assuming this hypercube topology) for both collision and element distinctness.
1207.2307: Efficient Distributed Quantum Computing
Robert Beals, Stephen Brierley, Oliver Gray, Aram W. Harrow, Samuel Kutin, Noah Linden, Dan Shepherd, Mark Stather
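For a feel of why hypercubes are such friendly topologies, here is the classic bit-fixing route between any two of the $2^d$ nodes. This is textbook material shown purely for illustration, not the routing scheme from the paper; the point is that degree and diameter are both only $d = \log_2 N$.

```python
# Bit-fixing routing on a d-dimensional hypercube (a standard textbook
# construction, shown only to illustrate why low-degree hypercube topologies
# are so routable; it is not the routing scheme of the paper).
def bit_fixing_path(src: int, dst: int, d: int) -> list[int]:
    """Path from src to dst, flipping differing address bits low-to-high."""
    path, node = [src], src
    for i in range(d):
        if (node ^ dst) & (1 << i):   # bit i still differs: cross that edge
            node ^= 1 << i
            path.append(node)
    return path

# Example: route between two corners of a 4-cube (16 nodes, degree 4).
print(bit_fixing_path(0b0000, 0b1011, 4))   # [0, 1, 3, 11]
```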
The next paper was on more familiar territory. Together with my former PhD student Richard Low (and building on the work of Dahlsten, Oliveira and Plenio), I proved that random quantum circuits are approximate unitary 2-designs in 0802.1919. Later Fernando Brandão and Michal Horodecki improved this to show that random quantum circuits are approximate unitary 3-designs in 1010.3654 (achieving a sweet oracle speedup in the process). Teaming up with them, I expected maybe to reach 5-designs, but in the end we were able to get arbitrary $k$-designs on $n$ qubits with circuits of length polynomial in both $n$ and $k$.
1208.0692: Local random quantum circuits are approximate polynomial-designs
Fernando G. S. L. Brandão, Aram W. Harrow, Michal Horodecki
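For reference, here is one common way to state the design property (my paraphrase; the paper's precise definition may use a different norm or error measure): a measure $\nu$ on the unitary group is an $\epsilon$-approximate $k$-design if $k$-fold twirling under $\nu$ is close to $k$-fold twirling under the Haar measure,

```latex
\[
  \Bigl\|
  \mathop{\mathbb{E}}_{U \sim \nu}
    U^{\otimes k} (\cdot)\, (U^{\dagger})^{\otimes k}
  \;-\;
  \mathop{\mathbb{E}}_{U \sim \mathrm{Haar}}
    U^{\otimes k} (\cdot)\, (U^{\dagger})^{\otimes k}
  \Bigr\|_{\diamond} \;\le\; \epsilon .
\]
```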
Finally, one more paper came out of my classical CS dabbling. Together with classical (but quantum-curious) theorists Alexandra Kolla and Leonard Schulman, we found a cute combinatorial result in our failed bid to refute the unique games conjecture on the hypercube. Our result concerns what are called maximal functions. Hardy and Littlewood introduced these by talking about cricket; here is a contemporary version. Imagine that you are a Red Sox fan whose happiness at any given time depends on what fraction of the last $n$ games the Red Sox have won against the Yankees. Fortunately, you are willing to choose $n$ differently from day to day in order to maximize your happiness. For example, if the Red Sox won the last game, you take $n=1$ and your happiness is 100%. If they won 3 out of the last 5 games, you could take $n=5$ and obtain happiness 60%. The maximal operator takes the win-loss series and transforms it into the happiness function (computed in the snippet after the reference below). (A similar principle is at work when the loser of rock-paper-scissors proposes best 3-out-of-5, and then best 4-out-of-7, etc.) In our paper, we bound how much the maximal operator can increase the 2-norm of a function when it acts on functions on the hypercube, and the maximization is taken not over intervals, but over Hamming spheres in the hypercube. Along the way, we prove some bounds on Krawtchouk polynomials that seem so simple and useful that I feel we must have overlooked some existing paper that already proves them.
1209.4148: Dimension-free L2 maximal inequality for spherical means in the hypercube
Aram W. Harrow, Alexandra Kolla, Leonard J. Schulman
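And since the happiness function is only a few lines of code, here it is, purely as an illustration of the maximal operator over intervals (the paper's version maximizes over Hamming spheres in the hypercube instead):

```python
# The Red Sox "happiness function": the maximal operator over windows of
# past games. Purely illustrative; the paper's operator averages over
# Hamming spheres in the hypercube rather than over intervals.
def happiness(wins):
    """wins[t] is 1 for a win, 0 for a loss; returns the maximal function."""
    out = []
    for t in range(len(wins)):
        best, total = 0.0, 0
        for n in range(1, t + 2):        # windows of length n ending at day t
            total += wins[t - n + 1]
            best = max(best, total / n)
        out.append(best)
    return out

# Example from the text: 3 wins in the last 5 games, with the two most
# recent games lost, gives happiness 0.6 by choosing n = 5.
print(happiness([1, 1, 1, 0, 0]))   # [1.0, 1.0, 1.0, 0.75, 0.6]
```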