Oldschool, Contradiction, and a Strawman

Today, Robert Alicki and Michał Horodecki have an interesting preprint on the arXiv, quant-ph/0603260: “Can one build a quantum hard drive? A no-go theorem for storing quantum information in equilibrium systems.” (Now where have I seen that phrase “quantum hard drive” before?) As you can imagine, the paper cuts closely to my own heart, so I thought I’d put some comments on it here.
The question the paper addresses is right there in the title: is it possible to store quantum information in a system in a way similar to how a hard drive stores its information in ferromagnetic domains? The authors claim to show that this is not possible. (On a personal note, I cannot read the words “no-go theorem” without thinking of John Bell’s famous quote: “what is proved by impossibility proofs is lack of imagination.” Of course, this quote is also ironic: Bell is most famous for proving the impossibility of using local hidden variable theories to simulate quantum correlations! And of course I in no way think that Alicki or Horodecki suffer from a lack of creativity, that’s for sure!)
The paper is structured into three parts, which I will call oldschool, contradiction, and strawman.
In the oldschool section of the paper the authors discuss why it is possible to store classical information in a ferromagnetic domain. They use the Curie-Weiss (CW) model of a ferromagnet, which is not exactly realistic, but which captures (for ferromagnets in high enough dimension) many of the essential ideas of real ferromagnets. In the Curie-Weiss model we have classical spins which point either up or down. The total magnetization is then the sum of all of the spins, counting +1 for spin up and -1 for spin down. The energy of a particular configuration in the CW model is minus the square of the total magnetization, times some coupling constant. This means that in the CW model each spin is coupled to every other spin via a ferromagnetic Ising interaction. A ferromagnetic Ising interaction makes it energetically favorable to align the two spins involved in the interaction. Thus the ground state of the CW model is two-fold degenerate: the all-spins-up state and the all-spins-down state. Further, there is a big energy barrier for flipping between these two ground states. In their paper Alicki and Horodecki present the old school argument for why information related to the total magnetization is resistant to the effects of interacting with a thermal environment. They provide a heuristic argument which basically says that the time it will take to mix between the two ground states (or, more precisely, to wash out the magnetization), if you are below the critical temperature, grows exponentially with the number of spins. The argument is very old school: one simply looks at a microscopic dynamics where spins are flipped at a rate proportional to the Boltzmann factor associated with flipping the spin. In such a setting the relaxation looks like a random walk of the magnetization on an energy landscape given by the free energy of the system.
One then finds that below a critical temperature the free energy has two minima, and when you analyze diffusion on this free energy landscape, the time it takes to diffuse between the two minima grows exponentially with the number of spins. Okay, all is good and well. This is (basically) the reason why, if I encode classical information into a cold enough ferromagnetic system (in high enough dimensions), the time it will take to destroy this information will be huge. Of course, one might also note that the time is huge, but finite: the states are metastable. Note, however, that this finite time means that eventually the information stored in the magnetization will be thermalized (or so close to thermalized that you can’t tell the difference).
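This random-walk picture is easy to play with numerically. Here is a toy sketch of my own (not code from the paper): single-spin-flip Metropolis dynamics on the CW model with the coupling scaled as J/N so the energy is extensive. Starting all-up well below the critical temperature, the magnetization stays pinned near +1 for far longer than any reasonable simulation time.

```python
import math
import random

def simulate_cw(n_spins=100, beta_j=3.0, steps=20000, seed=1):
    """Metropolis dynamics for the Curie-Weiss model with energy
    E = -(J/N) M^2, where M is the total magnetization.
    beta_j is the dimensionless product beta*J (chosen well below T_c)."""
    random.seed(seed)
    spins = [1] * n_spins           # start in the all-up ground state
    m = sum(spins)                  # total magnetization M
    for _ in range(steps):
        i = random.randrange(n_spins)
        s = spins[i]
        # Flipping spin i sends M -> M - 2s, so
        # beta*dE = -(beta_j/N)[(M - 2s)^2 - M^2] = (4*beta_j/N)(s*M - 1)
        beta_de = (4.0 * beta_j / n_spins) * (s * m - 1)
        if beta_de <= 0 or random.random() < math.exp(-beta_de):
            spins[i] = -s
            m -= 2 * s
    return m / n_spins              # magnetization per spin

# Below the critical temperature the magnetization barely budges:
print(simulate_cw())
```

The free-energy barrier between the two minima scales with n_spins, which is why the escape time (and hence the lifetime of the stored bit) blows up exponentially with system size.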
Okay, so much for the oldschool part of the paper. Now onto the meat of the paper. The authors next consider whether similar constructions can be performed for quantum information. Are there quantum systems with metastable states which can be used to store quantum information (a “quantum hard drive”)? Now, of course, given the above argument, what one would like to see is an argument about whether the time it takes to disorder quantum information in a system scales with the size of the system. Unfortunately, the authors do not take this approach but instead go a different route. Their approach is straightforward: they attempt to analyze the mathematical structure of the metastable states. They base their argument on the second law of thermodynamics. One way to phrase the second law of thermodynamics is the way Kelvin did: it is impossible to construct an engine which has no effect other than extracting heat from a reservoir and converting it to an equivalent amount of work. Fine. The authors then rephrase this in terms of what is called “passivity.” A quantum state is passive when it is not possible to extract energy from a system in that state by means of an engine’s cycle. Now the short of the story about passivity is that for finite systems, completely passive states (meaning that for all n, the n-fold tensor product of the state is passive with respect to n-fold local cyclic evolutions) are Gibbs states. In other words, for finite systems, there is only one completely passive state: the Gibbs state [tex]$\exp[-\beta H]/Z$[/tex]. Fine.
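To make passivity concrete, here is a small numerical illustration of my own (not from the paper): for a single qubit in the Gibbs state, no unitary cycle extracts positive work, because the populations decrease with energy.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
H = np.diag([0.0, 1.0])                  # qubit Hamiltonian with unit gap
p = np.exp(-beta * np.diag(H))
rho = np.diag(p / p.sum())               # Gibbs state exp(-beta H)/Z

def energy(state):
    return np.trace(state @ H).real

e0 = energy(rho)
best_extraction = -np.inf
for _ in range(2000):
    # Random unitary via QR decomposition of a complex Gaussian matrix
    g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    u = np.linalg.qr(g)[0]
    # Work extracted in a cycle implemented by U: E(rho) - E(U rho U^dag)
    best_extraction = max(best_extraction, e0 - energy(u @ rho @ u.conj().T))

print(best_extraction)  # never positive (up to rounding): the state is passive
```

Any unitary can only raise (or leave alone) the average energy of a state whose populations are sorted opposite to its energy levels, which is exactly what the sampled maximum shows.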
The authors then discuss the case of infinite systems, but I will mostly ignore this argument (1) because even in the oldschool part of the paper, the argument was not about infinite lifetimes, only about metastable states, (2) because I’ve never seen an infinite system, nor, unless I become God, do I think I will ever in the future see an infinite system, and (3) because even at this stage I object to their argument, and indeed everything about my objection carries over to the infinite case. Okay, I’ll mention the infinite system argument, just for completeness, but really I think the above objections should be kept in mind. The infinite system argument is that one can show that the completely passive states of an arbitrary (finite or infinite) system form a simplex. So now the authors argue that since these states form a simplex, i.e. each is a unique convex combination of a set of extremal states, these completely passive states (finite or infinite) cannot be used to store quantum information. So you see, even for finite systems, where there is just one completely passive state, the Gibbs state, the argument will be the same.
So what is my objection? Well, notice that the wool has been pulled over your eyes in this argument. In the oldschool section the authors argued that if you try to store information in the CW system, then the amount of time it takes for this information to be destroyed grows with the system size. In other words, if you start the system out in a non-Gibbs state of stored information, then it will take a long time to thermalize into the Gibbs state. This is what is important for storing information. Notice in particular that despite the fact that thermodynamic equilibrium is the end result of the dynamics considered, the process described goes from a non-equilibrium state to the equilibrium state. In the contradiction section of the paper the authors argue that the Gibbs state is the only state which does not violate the second law (or, in the infinite case, that such states form a simplex). But what does this have to do with the process where we attempt to store quantum information in the system and then ask how long it takes to reach the thermal equilibrium where the information is destroyed? As far as I can tell, it doesn’t have anything to do with it. In fact, if you think about their argument for a while, you will notice that for finite systems they are arguing that the only allowable state is the completely passive Gibbs state. In other words, if you apply their argument for finite systems to the system described in the oldschool section, the conclusion is that there is only one state, and one state cannot store information, i.e. classical hard drives, if they are finite, are not possible!
So what was the fallacy in the argument? It was the assumption that the question of whether you can store information (classical or quantum) in a system is one that can be analyzed from equilibrium alone. Certainly the second law, as stated in the Kelvin formulation, holds for states in thermal equilibrium, but we are explicitly not asking about this situation: we are asking about the states encountered in moving from a nonequilibrium situation to an equilibrium situation. And we are interested in timescales, not in final states.
So what is the final section of their paper about? It is about Kitaev’s toric code model for storing quantum information. This is the section I call the strawman. In particular, as I have argued before (see my paper here, or any of the many powerpoints of my talks where I talk about self-correcting quantum computers), it is well known that Kitaev’s model, in equilibrium, cannot store quantum information. The reason for this is the same reason as for the one dimensional Ising model. But we don’t care about equilibrium, remember. We care about how long it takes to move to equilibrium. Here is where the energy gap comes into play: Kitaev’s basic argument is that the creation of free thermal anyons will be suppressed like the Boltzmann factor [tex]$\exp[-\beta g]$[/tex] where [tex]$g$[/tex] is the gap energy. Thus, if we start the system away from equilibrium with the encoded quantum information, the probability that we will obtain a real thermal excitation is exponentially suppressed if the temperature is below the gap. Of course, every single qubit will be interacting with a bath, so the total rate of production of thermal anyons needs to be below one over the size of the system. For infinite systems this can never be fulfilled. But for finite sized systems it can be fulfilled: note that the inverse temperature only needs to grow like the logarithm of the system size (which, even for macroscopic systems, is rarely more than a factor of a hundred). In the preprint, the authors basically point out that in the equilibrium state there is a constant fraction of real thermal excitations and that these can diffuse around the torus to disorder the system: but again, the question is what happens when you start out of equilibrium and how long it then takes to return to equilibrium. So I believe that in this section they are attacking basically a strawman. Of course I have my own personal worries about the toric code model, and in particular about its fault tolerant properties.
You can read about this in my paper on self-correcting quantum memories.
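The logarithmic temperature requirement is just a one-line rearrangement: demanding that the total anyon production rate N exp[-βg] stay below some budget (taking a unit budget here, in a back-of-the-envelope sketch of my own) gives β ≥ ln(N)/g.

```python
import math

def required_beta(n, gap=1.0, budget=1.0):
    """Smallest inverse temperature beta such that the total thermal
    excitation rate n * exp(-beta * gap) stays below `budget`."""
    return math.log(n / budget) / gap

# Even for a macroscopic (Avogadro-sized) number of qubits, beta only
# needs to grow like log(n): a factor of tens, not a factor of n itself.
for n in (1e3, 1e6, 1e23):
    print(n, required_beta(n))
```

For 10^23 qubits the required β is only about 53/g, which is the sense in which the temperature "rarely needs to change by more than a factor of a hundred" even at macroscopic sizes.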

5 Replies to “Oldschool, Contradiction, and a Strawman”

  1. I think that a counterexample to the claim of Alicki and Horodecki is provided by the four-dimensional version of Kitaev’s toric code (discussed in http://arxiv.org/abs/quant-ph/0110143), which protects quantum information in a manner very analogous to how a two-dimensional ferromagnet protects classical information.
    As in Kitaev’s two-dimensional model, the (time-independent) Hamiltonian is a sum of commuting terms, but in this case the qubits live on 2-cells (i.e. plaquettes) in a four-dimensional cubic lattice. There is a term associated with each edge (the tensor product of six X’s acting on all of the 2-cells that meet at that edge) and a term associated with each cube (the tensor product of the six Z’s acting on all of the 2-cells contained in that cube). If the lattice covers a 4-torus, there are 6 encoded qubits, one associated with each of the cohomologically nontrivial 2-surfaces that wrap around the torus.
    Consider how Z errors are controlled in this model (the control of X errors works the same way, with the lattice and the dual lattice interchanged). If Z errors occur on a connected “droplet” of 2-cells, each of the edges on the boundary of the droplet is excited. So the energy is proportional to the length of the boundary of the droplet.
    Now we need to consider the competition between the energy and the entropy of droplets. But (just as in the corresponding analysis done many years ago for the two-dimensional Ising model by Peierls), the right way to think about it is to consider the dynamics of the loops of “string” that bound the droplets. The number of string loops of a specified length grows exponentially with length. Therefore, if the temperature is low enough, long loops of string are exponentially rare, due to the Boltzmann suppression.
    For the memory to fail, a thermal fluctuation must induce a loop to grow so large that it sweeps around the lattice. The time scale for such a thermal fluctuation grows exponentially with the size of the lattice.
    Admittedly, this memory requires four spatial dimensions, so it is not realistic. But spatial dimensionality did not seem to be considered in the Alicki-Horodecki argument.
    By the way, in http://arxiv.org/abs/quant-ph/0110143 we considered how to realize this idea by simulating the thermal bath using quantum gates and refreshable ancillas. Charlene Ahn and I have done a much more detailed analysis of this (reported at QIP 2005, see http://www.theory.caltech.edu/~preskill/talks/QIP05_preskill.pdf, but unfortunately not yet written up for publication). But that is not what I am talking about here. Rather, I just want to consider the system described by the four-dimensional version of Kitaev’s Hamiltonian in thermal equilibrium.
    This is actually easier to analyze.

  2. To summarize: The difference between the two-dimensional and four-dimensional Kitaev models is much like the difference between the one-dimensional and two-dimensional Ising models. The 1D Ising model, or the 2D Kitaev model, gets disordered at any finite temperature, because the energy cost of creating a point defect is a constant independent of the system size. But to disorder the 2D Ising model, or the 4D Kitaev model, a string loop with length comparable to the linear size of the system must be created by thermal fluctuations, which is exponentially unlikely at sufficiently low nonzero temperature.
    Quantum information *is* harder to stabilize than classical information, but it’s not impossible. In this setting, the price of quantum memory is a higher required spatial dimensionality.

  3. Okay, let’s get those gentlemen from Poland to respond. To provoke a response, I have composed an especially vitriolic Haiku. As Bertie Wooster would say, I mean it to sting.
    To dear Alicki
    and also Horodecki
    your paper icky

  4. Is it unfair of me to have a knee-jerk reaction against papers that claim to prove that quantum computation is impossible? It is still conceivable that someone will think of some completely new reason that it is impossible. Even so, papers like this one seem just plain inadequate.
    Is it a matter of style? There was a Bugs Bunny cartoon with Christopher Columbus arguing back and forth with the King of Spain: “Flat!” “Round!” “Flat!” “Round!” I suppose that debates with outright mutual contradictions are useful in politics, or even psychology. I am not used to them in mathematics or CS theory. Are they more accepted in theoretical physics?

  5. I have only briefly looked through Alicki and Horodecki’s paper so far, so I will need to digest it more before commenting on their work. I would like to point out, however, some work towards a “quantum hard drive” from a different perspective than the Hamiltonian approach.
    In Chapter 5 of my thesis, I wrote about joint work with Charlene Ahn on constructing a purely local scheme for maintaining quantum memory in a two-dimensional array. We considered placing classical controllers that regularly perform measurements on nearby qubits (and which are allowed to fail at some given rate).
    Any such finite system will eventually degrade (since there is a nonzero probability of a fatally large cluster of errors occurring), but we showed that provided the individual qubit and controller error rates are below certain values, the relaxation time for the system grows exponentially with the size of the system. Hence, quantum memory can in principle be locally maintained in two dimensions for very long times with only moderate overhead.
    In connection with John Preskill’s comments, we tried to demonstrate that two-dimensional quantum memory is analogous to one-dimensional classical memory.
