Mostly Right With a Little Bit of Wrong

Seth Lloyd's book, Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos, has been reviewed in the New York Times. Okay, for the sake of my blood pressure, I just should not be allowed to read passages like this:

Ordinary desktop computers are a flawed model of the physical world, Lloyd argues, because they handle everything as clear “yes” or “no” commands, while the universe operates according to the rules of quantum physics, which inherently produce fuzzy results. But Lloyd happens to be one of the world’s experts in a new kind of computing device, called a quantum computer, which can produce similarly vague answers like “mostly yes but also a little bit no.” …

Ahhhhhhh! Yeah, I should avoid those passages.

Oldschool, Contradiction, and a Strawman

Today, Robert Alicki and Michał Horodecki have an interesting preprint on the arXiv, quant-ph/0603260: "Can one build a quantum hard drive? A no-go theorem for storing quantum information in equilibrium systems." (Now where have I seen that phrase "quantum hard drive" before?) As you can imagine, the paper cuts close to my own heart, so I thought I'd put some comments on the paper here.
The question the paper addresses is right there in the title: is it possible to store quantum information in a system in a way similar to how a hard drive stores its information in the magnetic domains of a ferromagnet? The authors claim to show that this is not possible. (On a personal note, I cannot read the words "no-go theorem" without thinking of John Bell's famous quote: "what is proved by impossibility proofs is lack of imagination." Of course, this quote is also ironic: Bell is most famous for proving the impossibility of using local hidden variable theories to simulate quantum correlations! And of course I in no way think that Alicki or Horodecki suffer from a lack of creativity, that's for sure!)
The paper is structured into three parts, which I will call oldschool, contradiction, and strawman.
In the oldschool section of the paper the authors discuss why it is possible to store classical information in a ferromagnetic domain. They use the Curie-Weiss (CW) model of a ferromagnet, which is not exactly realistic, but which captures (for ferromagnets of high enough dimension) much of the essential physics of real ferromagnets. In the Curie-Weiss model we have classical spins which point either up or down. The total magnetization is the sum of all of the spins, counting +1 for spin up and -1 for spin down. The energy of a particular configuration in the CW model is given by the negative of the total magnetization squared, times some coupling constant. This means that in the CW model each spin is coupled to every other spin via a ferromagnetic Ising interaction. A ferromagnetic Ising interaction makes it energetically favorable for the two spins involved in the interaction to align. Thus the ground state of the CW model is two-fold degenerate: the all-spins-up state and the all-spins-down state. Further, there is a big energy barrier to flipping between these two ground states. In their paper Alicki and Horodecki present the old school argument for why information related to the total magnetization is resistant to the effects of interacting with a thermal environment. They provide a heuristic argument which basically says that, below the critical temperature, the time it takes to mix between the two ground states (or, more precisely, to wash out the magnetization) grows with the number of spins. The argument is very old school: one simply looks at a microscopic dynamics where spins are flipped at a rate proportional to the Boltzmann factor associated with flipping the spin. In such a setting the relaxation looks like a random walk of the magnetization on an energy landscape given by the free energy of the system. One then finds that below a critical temperature the free energy has two minima, and when you analyze the diffusion on this free energy landscape, the time it takes to diffuse between the two minima grows with the number of spins. Okay, all is good and well. This is (basically) the reason why, if I encode classical information into a cold enough ferromagnetic system (in high enough dimensions), the time it will take to destroy this information will be huge. Of course, one might also note that the time is huge, but finite: the states are metastable. Note, however, that this finite time means that eventually the information stored in the magnetization will be thermalized (or get so close to thermalized that you can't tell the difference).
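To make the oldschool argument concrete, here is a minimal Python sketch (my own toy example, not taken from the Alicki-Horodecki paper) of Metropolis dynamics for the CW model, using the conventional 1/N normalization of the coupling. Below the mean-field critical temperature, the number of sweeps before the magnetization first changes sign grows quickly with the number of spins:

```python
import numpy as np

def first_sign_flip_time(N, T, J=1.0, max_sweeps=5000, seed=0):
    """Metropolis dynamics for the Curie-Weiss model with E = -J*M^2/(2N).
    Returns the number of sweeps before the total magnetization M first
    changes sign (or max_sweeps if it never does)."""
    rng = np.random.default_rng(seed)
    spins = np.ones(N, dtype=int)   # start in the all-spins-up ground state
    M = N                           # total magnetization
    beta = 1.0 / T
    for sweep in range(max_sweeps):
        for _ in range(N):
            i = rng.integers(N)
            # energy cost of flipping spin i: dE = (2J/N) * (spins[i]*M - 1)
            dE = (2.0 * J / N) * (spins[i] * M - 1.0)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                M -= 2 * spins[i]
                spins[i] *= -1
        if M < 0:                   # the stored bit has been washed out
            return sweep + 1
    return max_sweeps

for N in (8, 16, 32):
    print(N, first_sign_flip_time(N, T=0.6))  # T = 0.6 J, below the mean-field T_c = J
```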
Okay, so much for the oldschool part of the paper. Now onto the meat of the paper. The authors next consider whether similar constructions can be performed for quantum information. Are there quantum systems with metastable states which can be used to store quantum information (a "quantum hard drive"?) Now, given the above argument, what one would like to see is an argument about whether the time it takes to disorder quantum information in a system scales with the size of the system. Unfortunately, the authors do not take this approach but instead go a different route. Their approach is straightforward: they attempt to analyze the mathematical structure of the metastable states. They base their argument on the second law of thermodynamics. One way to phrase the second law of thermodynamics is Kelvin's: it is impossible to construct an engine which has no effect other than extracting heat from a reservoir and converting it to an equivalent amount of work. Fine. The authors then rephrase this in terms of what is called "passivity." A quantum state is passive when it is not possible to extract energy from the system in that state by means of an engine's cycle. Now the short of the story about passivity is that for finite systems, completely passive states (meaning that for all n, the n-fold tensor product of the state is passive with respect to n-fold local cyclic evolutions) are Gibbs states. In other words, for finite systems, there is only one completely passive state: the Gibbs state [tex]$\exp[-\beta H]/Z$[/tex]. Fine. The authors then discuss the case of infinite systems, but I will ignore this argument because (1) even in the oldschool part of the paper, the argument was not about infinite lifetimes, only about metastable states, (2) I've never seen an infinite system, nor, unless I become God, do I think I will ever see one in the future, and (3) even at this stage I object to their argument, and indeed everything about my objection carries over to the infinite case. Okay, I'll mention the infinite system argument, just for completeness, but really I think the above objections should be kept in mind. The infinite system argument is that one can show that the completely passive states of an arbitrary (finite or infinite) system form a simplex. So the authors argue that since these states form a simplex, i.e. every such state has a unique decomposition into extremal states, these completely passive states (finite or infinite) cannot be used to store quantum information. So you see, even for finite systems, where there is just one completely passive state, the Gibbs state, the argument is the same.
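To illustrate what passivity means in practice, here is a small numpy sketch of my own (not from the preprint). A state is passive when no cyclic unitary can extract work from it, and the maximum extractable work (the ergotropy) is obtained by reshuffling the state's populations so that the largest populations sit on the lowest energy levels. The Gibbs state [tex]$\exp[-\beta H]/Z$[/tex] always has zero ergotropy, while a population-inverted state does not:

```python
import numpy as np

def ergotropy(rho, H):
    """Maximum work extractable from rho by a cyclic unitary: tr(rho H) minus
    the energy of the 'passive version' of rho, which pairs the largest
    populations with the lowest energy levels."""
    energies = np.linalg.eigvalsh(H)                       # ascending energies
    populations = np.sort(np.linalg.eigvalsh(rho))[::-1]   # descending populations
    return np.real(np.trace(rho @ H)) - populations @ energies

# A random 3-level Hamiltonian (purely hypothetical example)
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2

beta = 1.0
E, V = np.linalg.eigh(H)
gibbs = V @ np.diag(np.exp(-beta * E) / np.sum(np.exp(-beta * E))) @ V.conj().T
print("ergotropy of the Gibbs state:   ", round(ergotropy(gibbs, H), 10))     # 0: passive

# A population-inverted state: large populations on the high-energy levels
inverted = V @ np.diag(np.exp(+beta * E) / np.sum(np.exp(+beta * E))) @ V.conj().T
print("ergotropy of the inverted state:", round(ergotropy(inverted, H), 10))  # > 0: work extractable
```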
So what is my objection? Well, notice that the wool has been pulled over your eyes in this argument. In the oldschool section the authors argued that if you store information in the CW system, then the amount of time it takes for this information to be destroyed grows with the system size. In other words, if you start a system out in a non-Gibbs state carrying stored information, then it will take a long time to thermalize into the Gibbs state. This is what is important for storing information. Notice in particular that despite the fact that thermodynamic equilibrium is the end result of the dynamics considered, the process described goes from a non-equilibrium state to the equilibrium state. In the contradiction section of the paper the authors argue that the Gibbs state is the only state which does not violate the second law (or, in the infinite case, that such states form a simplex). But what does this have to do with the process where we attempt to store quantum information in the system and then ask how long it takes to reach the thermal equilibrium where the information is destroyed? As far as I can tell it doesn't have anything to do with it. In fact, if you think about their argument for a while, you will notice that for finite systems they are arguing that the only allowable state is the single completely passive state, the Gibbs state. In other words, if you apply their argument for finite systems to the system described in the oldschool section, the conclusion is that there is only one state, and one state cannot store information, i.e. classical hard drives, if they are finite, are not possible!
So what was the fallacy in the argument? It was the assumption that the question of whether you can store information (classical or quantum) in a system is one that can be analyzed from equilibrium alone. Certainly the second law, as stated in the Kelvin formulation, holds for states in thermal equilibrium, but we are explicitly not asking about that situation: we are asking about the states a system passes through in moving from a nonequilibrium situation to an equilibrium situation. And we are interested in timescales, not in final states.
So what is the final section of their paper about? It is about Kitaev's toric code model for storing quantum information. This is the section I call the strawman. In particular, as I have argued before (see my paper here or any of the many powerpoints of my talks where I talk about self-correcting quantum computers), it is well known that Kitaev's model, in equilibrium, cannot store quantum information. The reason for this is the same reason as for the one dimensional Ising model. But we don't care about equilibrium, remember. We care about how long it takes to move to equilibrium. Here is where the energy gap comes into play: Kitaev's basic argument is that the creation of free thermal anyons will be suppressed like the Boltzmann factor [tex]$\exp[-\beta g]$[/tex], where [tex]$g$[/tex] is the gap energy. Thus, if we start the system away from equilibrium with the encoded quantum information, the probability that we will obtain a real thermal excitation is exponentially suppressed if your temperature is below the gap. Of course, every single qubit will be interacting with a bath, so to keep the total rate of production of thermal anyons of order one, the rate per qubit needs to be below one over the size of the system. For infinite systems this can never be fulfilled. But for finite sized systems it can be fulfilled: note that the temperature only needs to decrease like one over the logarithm of the system size (and even for macroscopic systems the logarithm is rarely more than a factor of a hundred). In the preprint, the authors basically point out that in the equilibrium state there is a constant fraction of real thermal excitations and these can diffuse around the torus to disorder the system: but again, the question is what happens when you start out of equilibrium and how long it then takes to return to equilibrium. So I believe that in this section they are attacking, basically, a strawman. Of course I have my own personal worries about the toric code model and in particular about its fault tolerant properties. You can read about this in my paper on self-correcting quantum memories.
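Just to spell out the back-of-the-envelope arithmetic behind that logarithm (my own sketch, with a made-up gap value): demanding that the number of qubits times the per-qubit Boltzmann factor, [tex]$N \exp[-g/T]$[/tex], stays below one gives [tex]$T < g/\ln N$[/tex], so the required temperature falls only logarithmically with the system size.

```python
import numpy as np

g = 1.0                        # energy gap in arbitrary units (hypothetical value)
for N in (10**3, 10**6, 10**12, 10**23):
    T_max = g / np.log(N)      # largest temperature with N * exp(-g/T) < 1
    print(f"N = {N:.0e}:  need T < {T_max:.4f} * g   (ln N = {np.log(N):.1f})")
```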

Quantum Key Distribution: "Ready for Prime Time"

What should we make of this press release from MagiQ Technologies? The press release cites two advances:

The first breakthrough demonstrates there are no distance limitations to MagiQ’s quantum cryptography solution. Utilizing Verizon’s commercial fiber infrastructure, MagiQ successfully bridged two separate spans of 80km by cascading a number of MagiQ’s QPN 7505 devices. The commercial fiber network is made up of spans, typically 80km, linked together to complete metro area networks and long haul networks. Cascading of quantum cryptography devices enables the deployment of quantum cryptography throughout the telecommunications network.

and

The second breakthrough demonstrates that quantum keys can be mixed with other optical signals on one strand of fiber. Previously, quantum cryptography devices required a dedicated dark fiber strand, which is costly and not always available, for the transmission of quantum keys. The multiplexing of quantum keys with data on one strand significantly reduces the cost of deploying quantum cryptography.

The first item sounds like they are relaying keys through some sort of cascaded, trusted-node infrastructure rather than using quantum repeaters, right? I wonder how many of these MagiQ systems are actually in use right now? Ah, the world of cryptography, where every sentence I write is a question?

The Cosmic Computer

Seth Lloyd has a new book out, Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos. I just picked up a copy and flipped to this amusing anecdote:

…When Shannon showed his new formula for information to the mathematician John von Neumann and asked him what the quantity he had just defined should be called, von Neumann is said to have replied “H.”
“Why H?” asked Shannon.
“Because that’s what Boltzmann called it,” said von Neumann…

That crazy von Neumann.

Strangest Quantum Computing Book?

Today I was in the University of Washington bookstore and noticed an oversized book in the physics section. The top of the book read "Quantum Computing," so I pulled it out. The subtitle, however, was a bit of a surprise: "The Vedic Fabric of the Digital Universe." Um. Who is this book for? "This book is for anyone who needs to understand the one-to-one correspondence between quantum computing and the ancient Vedic Literature." Which makes me wonder: is there a many-to-one correspondence between quantum computing and other religious/philosophical texts? What do I think of the book? Well, this book has some of the most amazing diagrams I have ever seen, or think I will ever see, in a quantum computing book. For a taste you might visit www.vediccomputing.com. That's what I think of this book.

APS TGQI Best Student Paper Awards

Congrats to the two winners of the first Best Student Paper Awards of the APS Topical Group on Quantum Information, Concepts and Computation: Michael Garrett (Calgary) and Chris Langer (NIST, Boulder). (What, you've already seen this announcement? Congrats! What, you've not seen this announcement? Must be because you're not a member of the topical group. Maybe you should join? Then again, who am I to say what you should do!) The awards are sponsored by the Perimeter Institute (theory) and the Institute for Quantum Computing (experiment) and come with a $500 award, but more importantly with fabulous fame! (Hey, no jokes about the "American Physical Society" awards in quantum computing both being sponsored by Canadian institutions.)
Here is the abstract from Michael Garrett’s talk:

9:36AM U40.00007 Stochastic One-Way Quantum Computing with Ultracold Atoms in Optical Lattices, MICHAEL C. GARRETT, University of Calgary, DAVID L. FEDER, University of Calgary — The one-way model of quantum computation has the advantage over conventional approaches of allowing all entanglement to be prepared in a single initial step prior to any logical operations, generating the so-called cluster state. One of the most promising experimental approaches to the formation of such a highly entangled resource employs a gas of ultracold atoms confined in an optical lattice. Starting with a Mott insulator state of pseudospin-1/2 bosons at unit filling, an Ising-type interaction can be induced by allowing weak nearest-neighbor tunneling, resulting in the formation of a cluster state. An alternate approach is to prepare each spin state in its own sublattice, and induce collisional phase shifts by varying the laser polarizations. In either case, however, there is a systematic phase error which is likely to arise, resulting in the formation of imperfect cluster states. We will present various approaches to one-way quantum computation using imperfect cluster states, and show that the algorithms are necessarily stochastic if the error syndrome is not known.

and here is the abstract from Chris Langer’s talk

8:48AM U40.00003 Robust quantum memory using magnetic-field-independent atomic qubits, C. LANGER, R. OZERI, J. D. JOST, B. DEMARCO, A. BEN-KISH, B. BLAKESTAD, J. BRITTON, J. CHIAVERINI, D. B. HUME, W. M. ITANO, D. LEIBFRIED, R. REICHLE, T. ROSENBAND, P. SCHMIDT, D. J. WINELAND — Scalable quantum information processing requires physical systems capable of reliably storing coherent superpositions for times over which quantum error correction can be implemented. We experimentally demonstrate a robust quantum memory using a magnetic-field-independent hyperfine transition in 9Be+ atomic ion qubits at a field B = 0.01194 T. Qubit superpositions are created and analyzed with two-photon stimulated-Raman transitions. We observe the single physical qubit memory coherence time to be greater than 10 seconds, an improvement of approximately five orders of magnitude from previous experiments. The probability of memory error for this qubit during the measurement period (the longest timescale in our system) is approximately 1.4 × 10^-5 which is below fault-tolerance threshold for common quantum error correcting codes.

Bits, Bits, Wherefore Art Thou Bits?

Black holes evaporate via the process of Hawking radiation. If we take the initial pure state describing a body which will collapse to form a black hole which then evaporates, it would appear that this pure state evolves, after the complete evaporation, to a state which is mixed. Since it does this without leaving any remnant for the quantum information to hide in, this would seem to violate the unitarity of quantum theory, which does not allow a closed-system pure state to evolve to a closed-system mixed state. Similarly, if we consider an evaporating black hole, there are spacelike surfaces which contain both the collapsing body and the outgoing Hawking radiation: therefore, if we somehow believe that the outgoing Hawking radiation contains the quantum information which we have thrown into the black hole, this process must have cloned this information and thus violated the linearity of quantum theory. This reasoning has been hounding theoretical physicists for ages and is known as the black hole information paradox.
A few years ago, Horowitz and Maldacena proposed a solution to this problem (see hep-th/0310281, "The black hole final state"). The basic idea of their solution is to use boundary conditions at the black hole singularity to fix this problem. Actually, the basic idea of their solution is to use post-selected quantum teleportation. How does this work? Well, consider the initial pure state describing the collapsing matter. In addition to this collapsing matter, there will be the Unruh state of the infalling and outgoing Hawking radiation. Now it works out that this Unruh state is basically a maximally entangled quantum state. So what Horowitz and Maldacena propose is that, in order to get the pure state describing the collapsing matter "out" of the black hole, one need only "teleport" this information using the Unruh radiation. Indeed, we can consider such a teleportation: perform a measurement of the pure state describing the collapsing matter and the state of the infalling Hawking radiation in the appropriate (generalized) Bell basis. Now, of course, in order to complete the teleportation procedure we need to send the result of this measurement to the outgoing radiation and apply the appropriate unitary rotation to recover the pure state of the collapsing matter outside of the black hole. But, of course, we can't do this: we can't send the classical information from inside the black hole to outside. So what Horowitz and Maldacena propose is that instead of performing the full teleportation, a particular result is post-selected for this teleportation. And this result is the one where you do nothing in the teleportation protocol. In other words, they postulate that the black-hole singularity acts like a measurement in which one always gets a particular outcome. This is the "final state" projection postulate.
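Here is a single-qubit cartoon of that post-selected teleportation, as a numpy sketch of my own (not anything from the Horowitz-Maldacena paper): treat the Unruh state on the infalling and outgoing modes as a maximally entangled pair, project the matter qubit and the infalling mode onto the same maximally entangled state with no correction applied, and the outgoing mode ends up in the original matter state.

```python
import numpy as np

# Toy labels: M = collapsing matter, I = infalling mode, O = outgoing mode.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)     # |Phi+> = (|00> + |11>)/sqrt(2)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                         # state of the collapsing matter

state = np.kron(psi, phi_plus)                     # |psi>_M |Phi+>_{IO}, ordering (M, I, O)

# Final-state boundary condition: project (M, I) onto |Phi+>, apply NO correction.
bra = np.kron(phi_plus.conj(), np.eye(2))          # <Phi+|_{MI} tensored with identity on O
out = bra @ state                                  # unnormalized outgoing state
out /= np.linalg.norm(out)

print("fidelity with the original state:", abs(np.vdot(psi, out))**2)   # ~1.0
```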
This is a very nice way to try to avoid the black-hole information paradox. One reason is that it seems to put all of the craziness at the black-hole singularity, and none of the craziness elsewhere, where we would expect our current ideas about physics to be rather robustly true. But does this really solve the problem? Gottesman and Preskill, in hep-th/0311269, argue that it does not. What Gottesman and Preskill point out is that the collapsing matter and the infalling Unruh radiation will, in general, interact with each other. The problem is that this interaction can cause the information to no longer be faithfully reproduced outside of the black hole: the interaction amounts to post-selection onto a state which is no longer maximally entangled, and such a state will fail at producing the appropriate teleported copy of the state outside of the black hole. Sometimes, in fact, this interaction can completely destroy the effect Horowitz and Maldacena are attempting to achieve (if it disentangles the post-selected state). This does not bode well for this post-selected answer to the black-hole information paradox.
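And here is the Gottesman-Preskill point in the same toy sketch (again my own illustration): let an interaction U act on the matter and infalling mode before the final-state projection. The effective post-selected state is then the Bell state hit by U, which need not be maximally entangled; a CNOT, for instance, disentangles it completely, and the outgoing mode comes out in the same fixed state no matter what went in (for one particular input the post-selection amplitude even vanishes entirely).

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])        # control = matter qubit M, target = infalling mode I

def escaped_state(psi, U):
    """Outgoing state after an interaction U on (M, I) followed by the
    final-state projection of (M, I) onto |Phi+> with no correction."""
    state = np.kron(psi, phi_plus)                      # ordering (M, I, O)
    state = np.kron(U, np.eye(2)) @ state               # interaction between M and I
    out = np.kron(phi_plus.conj(), np.eye(2)) @ state   # <Phi+|_{MI} tensored with identity on O
    norm = np.linalg.norm(out)
    return out / norm if norm > 1e-12 else out          # the amplitude may vanish

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(escaped_state(one, np.eye(4)))                    # no interaction: |1> escapes faithfully
print(escaped_state(zero, CNOT))                        # CNOT interaction: output is |0> ...
print(escaped_state(one, CNOT))                         # ... and is still |0>: the qubit is lost
print(escaped_state((zero - one) / np.sqrt(2), CNOT))   # here nothing escapes at all
```

That last line is the "completely destroy the effect" case: once the post-selected state is disentangled, the teleportation goes with it.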
So is this the end? Well, it is never the end! One might try to show that the amount of information destroyed in the Gottesman-Preskill scheme is small. Now this, in some sense, would be comforting: who would notice a single qubit missing in a world so large? On the other hand, while this might be comforting, it would certainly cause consternation among those of us who would like an explanation which is satisfying without giving up even an ounce of information. This question, of the amount of information lost, is addressed, in part, in the recent paper "Almost Certain Escape from Black Holes in Final State Projection Models" by Seth Lloyd, Physical Review Letters, 061302 (2006).
Consider the Horowitz-Maldacena scheme as modified by Gottesman and Preskill, with an interaction between the infalling radiation and the collapsing matter described by U. Lloyd calculates the fidelity of this scheme, averaged over all unitaries U according to the Haar measure (this fidelity is the overlap between the initial pure collapsing matter state and the outgoing state after the Horowitz-Maldacena post-selection). Lloyd finds that this fidelity is 0.85…. In the paper, Seth argues that this is large enough to indicate that most of the information survives. So, I ask, is a fidelity of 0.85… really satisfying? I would argue that it is not, even if I accept that averaging over the unitaries makes any sense at all. Why? Well, suppose that you daisy chain this procedure. Then it seems that your average fidelity could be made as small as desired. Thus, while you might argue that there is only a small loss of unitarity for random unitaries, there are physical processes for which this loss of unitarity is huge. This seems to be one of the lessons of quantum theory: destroying only a little bit of information is hard to do without bringing down the whole wagon. Daniel Gottesman says, I think, exactly this in this New Scientist article. Even Lloyd, at the end of his article, hedges his bets a bit:

Final-state projection will have to await experimental and theoretical confirmation before black holes can be used as quantum computers. It would be premature to jump into a black hole just now.

What about you? Would you jump into a black hole if the fidelity with which we could put you back together were 0.85… on average?
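Since the daisy-chaining worry above is just arithmetic, here is a tiny Python sketch of my own, under the naive (and by no means justified) assumption that each pass through such a final-state projection contributes an independent average fidelity factor of 0.85:

```python
fidelity_per_pass = 0.85          # Lloyd's Haar-averaged value, rounded
for n in (1, 2, 5, 10, 50):
    print(f"after {n:2d} passes: average fidelity ~ {fidelity_per_pass**n:.4f}")
```

Fifty passes and there is essentially nothing left of you.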

Are You Sure You See?

I'm visiting the KITP in Santa Barbara because they are having a term-long workshop on Topological Phases and Quantum Computation (directed by Sander Bais, Chetan Nayak, and John Preskill). Unfortunately I won't be able to stay for the entire workshop. But this isn't as huge a blow as it would have been years ago, because the KITP records all of the talks and puts them online. They can be found here.
Yesterday, T. Senthil gave a talk in which he almost made it through his introduction! Anyway, one point which he emphasized is something I have always found fascinating. And that is that topological phases (a loose term which I won't even try to define) might actually be much more common than is widely thought, but the problem is that we don't have the tools to see them! From the perspective of quantum error correcting codes, this seems fairly natural. In a quantum error correcting code, local probes, without at least some global analysis, cannot reveal the information encoded into the code. Indeed, if local probes without global analysis could get at our quantum data, then so too could the environment get at this data. Another way to state this is to say that the order parameter for a quantum error correcting code is a big complicated beast whose measurement cannot be done with the simple tools we currently have in our labs.
As a concrete example of this, consider single-molecule magnets. These are crazy molecules which can achieve permanent magnetization through a single-molecule effect (usually at fairly low temperatures). Basically these molecules have a high-spin ground state and a large zero-field splitting, such that there is a substantial barrier to leaving the high-spin ground state. This effect is the result of a collection of interactions among the lower-spin constituents. What is interesting is the following observation. Take four spin one-half systems and couple them together antiferromagnetically with equal couplings between all four systems (mmm, tetrahedra). Such a system will have a spin-zero ground state which is two-fold degenerate. And this two-fold degenerate ground state is actually a quantum error detecting code which detects any single error (see quant-ph/0012018)! But why in the world, if you were a chemist, would you go looking for a spin-zero molecule? That is the exact opposite of what you would like to find. Further, you won't be able to see that there is a quantum error detecting code without a knob which allows you to split the degeneracy of this ground state. And making such a knob is not a trivial task. In short, precisely because we don't have the tools to observe this effect, we aren't really interested in it. You need a crazy theorist to tell you that maybe you should think about trying to engineer such a knob onto your system. What use is a crazy theorist except to tell you crazy stories?
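Here is a quick numpy check of that four-spin claim (my own sketch): with equal antiferromagnetic Heisenberg couplings between all pairs, the Hamiltonian is [tex]$H = J\sum_{i<j} \vec{S}_i \cdot \vec{S}_j = (J/2)(S_{tot}^2 - 3)$[/tex], so the ground space is the two-fold degenerate total-spin-zero subspace.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
identity = np.eye(2)

def site_op(op, site, n=4):
    """Embed a single-spin operator at position `site` among n spins."""
    ops = [identity] * n
    ops[site] = op
    return reduce(np.kron, ops)

n, J = 4, 1.0
H = np.zeros((2**n, 2**n), dtype=complex)
for i in range(n):
    for j in range(i + 1, n):
        for op in (sx, sy, sz):
            H += J * site_op(op, i) @ site_op(op, j)

print(np.round(np.linalg.eigvalsh(H), 6))
# The lowest eigenvalue, -1.5*J, appears exactly twice: a two-fold degenerate
# spin-zero ground space, i.e. one protected qubit's worth of states.
```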
Of course, thinking like this, that there might be hidden orders which we are not smart enough to discover, is a good way to make yourself paranoid. What might be hidden from us because our glasses are so opaque? Certainly the role of instrumentation in science is one I find fascinating and, I must admit, a bit scary.

Dance Ions! Dance!

A reliable source (heh, that’s funny isn’t it) tells me that the ion trap dance I discussed here was the “Open Box Salsa.” Because, you know, I’m sure you all really wanted to know that.
(Which of course reminds me of a story. An old teaching technique for those who have trouble spelling (like me) is to learn to visualize the word you are trying to spell while looking, say, up and to the right. You know, that introspective thing you do when you are searching for some bit of knowledge. The idea is to leverage that to help your spelling. Well, a few years back I decided that what was more important than remembering things was forgetting things. Indeed, like the above information, I'll bet you were better off without it. So I started training myself to look down when I am trying to forget something. Not sure if it ever really worked. Or at least I don't remember if it worked.)

What Are the Assumptions of Classical Fault-Tolerance?

Oftentimes when we encounter the threshold theorem for quantum computation, it is pointed out that there are two assumptions which are necessary for fault-tolerant quantum computation. These two assumptions are

1. A supply of refreshable pure or nearly pure ancillas into which the entropy generated by quantum errors can be dumped throughout the error correction procedure.
2. Parallel operations.

For point 1, an important reference is Aharonov, Ben-Or, Impagliazzo and Nisan, quant-ph/9611028, and for point 2, an important reference is Aharonov and Ben-Or, quant-ph/9611029. These two assumptions are provably necessary. Usually when I see these two points presented, they are presented as if they are something unique to quantum computation. But is this really true? Well certainly not for the first case. In fact quant-ph/9611028 is first about computation in a noisy model of classical reversible computation and second about a noisy model of quantum computation! What about point 2? Well I can’t find a reference showing this, but it seems that the arguments presented in quant-ph/9611029 can be extended to reversible noisy classical computation.
Of course many of you will, by now, be grumbling that I haven’t included the other assumptions usually stated for the threshold theorem for fault-tolerant quantum computation. Okay, so I’ll put it in:

3. A noise model, relative to your ability to control the system, which is not too ridiculous.

Well, okay, usually it is not stated this way, but that's because I don't have an easy way to encapsulate the noise models which provably preclude fault-tolerant computation. But certainly there are similar noise constraints for classical computation which are provably necessary for fault-tolerant classical computation.
So what are the differences between the assumptions for fault-tolerant quantum and fault-tolerant classical computation? I strongly believe that the only differences here are differences arising simply from going from a theory of probabilities in classical computation to a theory of amplitudes in quantum computation. Of course this shortchanges the sweat and tears necessary to build the theory of fault-tolerant quantum computation, and I don't want to disparage that in any way. I just want to suggest that the more we learn about the theory of fault-tolerant quantum computation, the more we recognize its similarity to fault-tolerant probabilistic classical computation. I call this general view of the world the "principle of the ubiquitous factor of two in quantum computing." The idea being, mostly, that quantum theory differs from probabilistic theory only in the necessity of dealing with phase as well as amplitude errors, or, more generally, in going from probabilities to amplitudes. This point of view is certainly not even a heuristic, but it seems at least that this is the direction we are heading.
Of course the above view is speculative, not rigorous, and open to grand debate over beers or within the battleground of an academic seminar. But it certainly is one of the reasons why I'm optimistic about quantum computing, and why, when I talk to a researcher in higher dimensional black holes (for example) and they express pessimism, I find it hard to put myself in their shoes. To summarize that last sentence: I'm betting we have quantum computers before higher dimensional black holes are discovered 🙂