Self-correcting Fractals

A really exciting paper appeared on the arXiv today: A proposal for self-correcting stabilizer quantum memories in 3 dimensions (or slightly less), by Courtney Brell. It gives the strongest evidence yet that self-correcting quantum memories are possible in “physically realistic” three-dimensional lattice models. In particular, Courtney has constructed families of local Hamiltonians in 3D whose terms consist of X- and Z-type stabilizer generators and that show phase-transition behavior akin to the 2D Ising model in both the X- and Z-type error sectors. This result doesn’t completely settle the question of whether self-correcting quantum memories can exist in principle, as I’ll explain below, but it makes impressive progress using a mix of rigorous analysis and physical argument.

First, what do I mean by “physically realistic”? Well, obviously I don’t mean physically realistic (without quotes)—that’s a much greater challenge. Rather, we want to abstractly characterize some features that should be shared by a physically realistic implementation, but with enough leeway that a theorist can get creative. To capture this, Courtney introduces the so-called Caltech Rules for a self-correcting quantum memory.

The phrase “the Caltech Rules” is (I believe) attributable to David Poulin. Quantum memory aficionados have been debating these rules in emails and private discussions for the last few years, but I think this is the first time someone has put them in print. As rules, they aren’t really set in stone. They consist of a list of criteria that are either necessary or seemingly necessary to avoid models that are self-correcting for trivial and unphysical reasons (e.g., scaling the coupling strengths as a function of $latex n$). In Courtney’s version of the rules, we require a model with finite-dimensional spins (so no bosonic or fermionic models allowed… this might be objectionable to some people), bounded-strength short-range interactions between the spins, a constant density of spins, a perturbatively stable degenerate ground space for the encoded states, an efficient decoding algorithm, and an exponential memory lifetime against low-temperature thermal noise. One might wish to add even more desiderata, like translation-invariant couplings or a spectral gap (which is closely related to stability), but finding a self-correcting memory subject to these constraints is already a tall order. For more discussion of these points, check out another paper that appeared on the arXiv yesterday: an excellent review article on quantum memories at finite temperature by Ben Brown et al.

To motivate the construction, it helps to remember everyone’s favorite models, the Ising model and the toric code. When the temperature $latex T$ is zero, it’s easy to store a classical bit using the 1D Ising model; this is just a repetition code. Similarly, the 2D toric code can store quantum information at $latex T=0$. Both of these codes become unstable as memories at $latex T>0$ because of the presence of string-like logical operators. The physical process that creates these strings costs some energy, but then the strings can stretch and grow without any further energy cost, and thermal fluctuations alone will create enough strings in a short time to cause a decoding failure. By contrast, the 2D Ising model can store a classical bit reliably for an exponential amount of time if you encode in the total magnetization and you are below the Curie temperature. The logical operators are now membranes that cost energy to grow. Similarly, the 4D toric code has such a phase transition, because its X- and Z-type error sectors each behave like a 2D Ising model with membrane-like logical operators.
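
To make the string-versus-membrane distinction concrete, here is a minimal numpy sketch (a toy of my own, not anything from the paper) that compares the energy cost of a flipped domain of linear size $latex \ell$ deep inside an all-up 1D chain versus an all-up 2D lattice, for a ferromagnetic Ising model with open boundaries.

```python
import numpy as np

J = 1.0  # ferromagnetic coupling strength (arbitrary units)

def energy_1d(s):
    """Nearest-neighbour Ising energy of a 1D chain (open boundaries)."""
    return -J * np.sum(s[:-1] * s[1:])

def energy_2d(s):
    """Nearest-neighbour Ising energy of a 2D square lattice (open boundaries)."""
    return -J * (np.sum(s[:-1, :] * s[1:, :]) + np.sum(s[:, :-1] * s[:, 1:]))

N = 64
for ell in (1, 4, 16):
    # 1D: flip a contiguous block of ell spins deep inside an all-up chain.
    chain = np.ones(N)
    chain_flipped = chain.copy()
    chain_flipped[N // 2 : N // 2 + ell] *= -1
    cost_1d = energy_1d(chain_flipped) - energy_1d(chain)

    # 2D: flip an ell x ell block deep inside an all-up lattice.
    lattice = np.ones((N, N))
    lattice_flipped = lattice.copy()
    lattice_flipped[N // 2 : N // 2 + ell, N // 2 : N // 2 + ell] *= -1
    cost_2d = energy_2d(lattice_flipped) - energy_2d(lattice)

    print(f"domain size {ell:2d}:  1D cost = {cost_1d:5.1f}   2D cost = {cost_2d:6.1f}")
```

The 1D cost is stuck at $latex 4J$ (two domain walls) no matter how long the domain gets, so thermal fluctuations can grow it freely, while the 2D cost scales as $latex 8J\ell$ with the domain's perimeter. That growing barrier is exactly what the Peierls argument exploits.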

[Figure: Sierpinski carpet, with edges placed to form a “Sierpinski graph”.]

The codes that Courtney defines are called embeddable fractal product codes (EFPC). The guiding idea: a product of two 1D Ising models does not give a 2D self-correcting memory, but a product of two 2D Ising models does give a self-correcting memory, so what happens if we take two “1.5-dimensional” Ising models and try to make a 3D self-correcting memory? The backbone of the construction consists of fractals such as the Sierpinski carpet that have infinite ramification order, meaning that an infinite number of edges of an associated graph must be cut to split it into two infinite components. Defining an Ising model on the Sierpinski graph yields a finite-temperature phase transition for the same reason the 2D Ising model does: the Peierls argument, which is essentially a counting argument about the density of domain walls in equilibrium with fixed boundary conditions. This is exactly the kind of behavior needed for self-correction.

[Figure: Splitting the Sierpinski graph into two infinite components necessarily cuts an infinite number of edges.]
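
If you want to play with this geometry yourself, here is a short Python sketch (again a toy of my own, not the precise lattice or boundary conditions used in the paper) that builds the level-$latex L$ Sierpinski carpet as a nearest-neighbour graph and counts the edges crossing a vertical cut through the middle.

```python
def in_carpet(x, y, level):
    """(x, y) on the 3**level x 3**level grid lies in the Sierpinski carpet
    iff its base-3 digits never hit the removed middle cell (1, 1) at any scale."""
    for _ in range(level):
        if x % 3 == 1 and y % 3 == 1:
            return False
        x, y = x // 3, y // 3
    return True

def carpet_graph(level):
    """Vertices are carpet sites; edges join nearest neighbours within the carpet."""
    n = 3 ** level
    sites = {(x, y) for x in range(n) for y in range(n) if in_carpet(x, y, level)}
    edges = [((x, y), (x + dx, y + dy))
             for (x, y) in sites
             for dx, dy in ((1, 0), (0, 1))
             if (x + dx, y + dy) in sites]
    return sites, edges

for level in (1, 2, 3, 4):
    n = 3 ** level
    sites, edges = carpet_graph(level)
    # Edges crossing the vertical line between columns n//2 - 1 and n//2.
    crossing = sum(1 for a, b in edges if a[0] == n // 2 - 1 and b[0] == n // 2)
    print(f"level {level}: {len(sites)} sites, {len(edges)} edges, "
          f"{crossing} edges crossing the central cut")
```

The crossing count doubles with each level (it prints 2, 4, 8, 16 for levels 1 through 4), a finite-size hint of the infinite ramification order described above.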

The next step is to define a toric-code-like set of generators using the adjacency of the Sierpinski graph, paying careful attention to the boundary conditions (in particular, plaquette terms are placed so that the stabilizer group contains all the cycles that bound areas of the fractal, at every length scale). The homological product then gives a natural way to combine X-like and Z-like copies of this code into a new code that naturally lives in four dimensions. Although the natural embedding of this code requires all four spatial dimensions, it turns out that an embedding into three dimensions is possible with distortion bounded by a small constant, so these codes can be compressed into 3D while retaining the crucial locality properties.
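
To see what this kind of product does mechanically, here is a minimal sketch of the hypergraph product, a standard special case of the homological product; this is not Courtney's exact EFPC construction (which uses the Sierpinski-graph codes and carefully chosen boundary terms), just the generic recipe. Starting from two classical parity-check matrices, it produces a pair of CSS check matrices whose X- and Z-type generators automatically commute.

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Hypergraph (homological) product of two classical parity-check matrices,
    returning CSS check matrices (HX, HZ) over GF(2)."""
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))]) % 2
    return HX, HZ

# Toy input: the 1D repetition (Ising) code on 4 bits.
H_rep = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1]], dtype=int)

HX, HZ = hypergraph_product(H_rep, H_rep)
print("qubits:", HX.shape[1])
print("X and Z checks commute:", not np.any(HX @ HZ.T % 2))  # CSS condition
```

The repetition code used here is just the 1D Ising model written as a parity-check matrix; swapping in checks built on a fractal graph like the Sierpinski carpet is the spirit of the EFPC construction.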

Remarkably, this construction gives a finite-temperature phase transition for both the X- and Z-type errors. It essentially inherits this from the fact that the Ising models on the Sierpinski graph have phase transitions, and it is a very strong indication of self-correcting behavior.

However, there are some caveats. This code has many logical qubits (in fact, it has constant rate), but only the qubits associated to the coarsest features of the fractal have large distance. The many logical qubits associated to small-scale features have small distance and create an exponential degeneracy of the ground space. With such a large degeneracy, one worries about perturbative stability in the presence of a generic local perturbation. There are a few other caveats, for example the question of efficient decoding, but to me the issue of the degeneracy is the most interesting.

Overall, this is the most exciting progress since Haah’s cubic code. I think I’m actually becoming optimistic about the possibility of self-correction. It looks like Courtney will be speaking about his paper at QIP this year, so this is yet another reason to make it to Sydney this coming January.

One Reply to “Self-correcting Fractals”

  1. On Gil Kalai’s weblog Combinatorics and More, both Gil and I posted appreciations of this fine essay.
    The quantum-related essays that The Quantum Pontiffs posted in 2014 were outstanding … yet there were only seven of them.
    Please let me say that many people (including me) hope that this pace-of-excellence can be sustained, and even increased, in the coming year of 2015.
    Surely there is plenty to say, regarding the role of quantum phenomena in the 21st century!
