Mostly Right With a Little Bit of Wrong

Seth Lloyd's book, Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos, has been reviewed in the New York Times. Okay, for the sake of my blood pressure, I just should not be allowed to read passages like this:

Ordinary desktop computers are a flawed model of the physical world, Lloyd argues, because they handle everything as clear “yes” or “no” commands, while the universe operates according to the rules of quantum physics, which inherently produce fuzzy results. But Lloyd happens to be one of the world’s experts in a new kind of computing device, called a quantum computer, which can produce similarly vague answers like “mostly yes but also a little bit no.” …

Ahhhhhhh! Yeah, I should avoid those passages.

Oldschool, Contradiction, and a Strawman

Today, Robert Alicki and Michal Horodecki have an interesting preprint on the arXiv, quant-ph/0603260: “Can one build a quantum hard drive? A no-go theorem for storing quantum information in equilibrium systems.” (Now where have I seen the phrase “quantum hard drive” before?) As you can imagine, the paper cuts close to my own heart, so I thought I’d put some comments on the paper here.
The question the paper addresses is right there in the title: is it possible to store quantum information in a system in a way similar to how a hard drive stores its information in ferromagnetic domains? The authors claim to show that this is not possible. (On a personal note, I cannot read the words “no-go theorem” without thinking of John Bell’s famous quote: “what is proved by impossibility proofs is lack of imagination.” Of course, this quote is also ironic: Bell is most famous for proving the impossibility of using local hidden variable theories to simulate quantum correlations! And of course I in no way think that Alicki or Horodecki suffer from a lack of creativity, that’s for sure!)
The paper is structured into three parts, which I will call oldschool, contradiction, and strawman.
In the oldschool section of the paper the authors discuss why it is possible to store classical information in a ferromagnetic domain. They use the Curie-Weiss (CW) model of a ferromagnet, which is not exactly realistic, but which captures (for ferromagnets of high enough dimension) much of the essential physics of real ferromagnets. In the Curie-Weiss model, we have classical spins which point either up or down. The total magnetization is then the sum of all of the spins, counting +1 for spin up and -1 for spin down. The energy of a particular configuration in the CW model is the negative of the total magnetization squared, times some coupling constant. This means that in the CW model, each spin is coupled to every other spin via a ferromagnetic Ising interaction. A ferromagnetic Ising interaction makes it energetically favorable to align the two spins involved in the interaction. Thus the ground state of the CW model is two-fold degenerate: the all-spins-up state and the all-spins-down state. Further, there is a big energy barrier for flipping between these two ground states. In their paper Alicki and Horodecki present the old school argument for why information encoded in the total magnetization is resistant to the effects of interacting with a thermal environment. They provide a heuristic argument which basically says that, below the critical temperature, the time it takes to mix between the two ground states (or, more precisely, to wash out the magnetization) grows with the number of spins. The argument is very old school: one simply looks at microscopic dynamics in which spins are flipped at a rate proportional to the Boltzmann factor associated with flipping the spin. In such a setting the relaxation looks like a random walk of the magnetization on an energy landscape given by the free energy of the system. One then finds that below a critical temperature the free energy has two minima, and when you analyze diffusion on this free energy, the time it takes to diffuse between the two minima grows with the number of spins. Okay, all is good and well. This is (basically) the reason why, if I encode classical information into a cold enough ferromagnetic system (in high enough dimensions), the time it will take to destroy this information will be huge. Of course, one might also note that the time is huge, but finite: the states are metastable. Note, however, that this finite time means that eventually the information stored in the magnetization will be thermalized (or so close to thermalized that you can’t tell the difference.)
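Since this heuristic is easy to poke at numerically, here is a minimal sketch, entirely my own construction and not from the paper, of single-spin-flip Metropolis dynamics on the CW model. The 1/N normalization of the coupling and all parameter values are arbitrary choices of mine; the point is simply that, below the transition, the number of sweeps needed to wash out an encoded bit grows with the number of spins.

```python
# Minimal sketch (my own, not from the paper): Metropolis dynamics on the Curie-Weiss model,
# counting how many sweeps it takes for an encoded bit (all spins up) to wash out.
import numpy as np

def washout_sweeps(n_spins, beta, j=1.0, max_sweeps=20000, seed=0):
    """Sweeps until the magnetization of an all-up start first reaches zero or below."""
    rng = np.random.default_rng(seed)
    spins = np.ones(n_spins, dtype=int)   # encode a classical bit: all spins up
    m = n_spins                           # total magnetization
    for sweep in range(max_sweeps):
        for _ in range(n_spins):
            i = rng.integers(n_spins)
            m_new = m - 2 * spins[i]
            # energy E(M) = -j * M^2 / N, so flipping spin i costs:
            delta_e = j * (m**2 - m_new**2) / n_spins
            if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
                spins[i] *= -1
                m = m_new
        if m <= 0:                        # the stored bit has been erased
            return sweep + 1
    return max_sweeps                     # still magnetized after the full budget

if __name__ == "__main__":
    for n in (16, 32, 64):
        # temperature chosen slightly below the ordering transition for this normalization
        print(n, washout_sweeps(n, beta=0.6))
```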
Okay, so much for the oldschool part of the paper. Now onto the meat of the paper. The authors next consider whether similar constructions can be performed for quantum information. Are there quantum systems with metastable states which can be used to store quantum information (a “quantum hard drive”)? Now, of course, given the above argument, what one would like to see is an argument about whether the time it takes to disorder quantum information in a system scales with the size of the system. Unfortunately, the authors do not take this approach but instead go a different route. Their approach is straightforward: they attempt to analyze the mathematical structure of the metastable states. They base their argument on the second law of thermodynamics. One way to phrase the second law of thermodynamics is the one due to Kelvin: it is impossible to construct an engine which has no effect other than extracting heat from a reservoir and converting it to an equivalent amount of work. Fine. The authors then rephrase this in terms of what is called “passivity.” A quantum state is passive when it is not possible to extract energy from a system in that state by means of an engine’s cycle. Now the short of the story about passivity is that for finite systems, completely passive states (meaning that for all n, the n-fold tensor product of the state is passive with respect to n-fold local cyclic evolutions) are Gibbs states. In other words, for finite systems, there is only one completely passive state: the Gibbs state [tex]$\exp[-\beta H]/Z$[/tex]. Fine. The authors then discuss the case of infinite systems, but I will ignore this argument because (1) even in the oldschool part of the paper, the argument was not about infinite lifetimes, only about metastable states, (2) I’ve never seen an infinite system, nor, unless I become God, do I think I will ever see one in the future, and (3) even at this stage, I object to their argument, and indeed everything about my objection carries over to the infinite case. Okay, I’ll mention the infinite system argument, just for completeness, but really I think the above objections should be kept in mind. The infinite system argument is that one can show that the completely passive states of an arbitrary (finite or infinite) system form a simplex. So now the authors argue that since these states form a simplex, i.e. each is a convex combination of a finite number of extremal states, these completely passive states (finite or infinite) cannot be used to store quantum information. So you see, even for finite systems, where there is just one completely passive state, the Gibbs state, the argument will be the same.
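For the curious, here is a tiny numerical illustration, again my own toy rather than anything in the paper, of what passivity means operationally: the maximum work extractable from a state by a cyclic unitary process (its “ergotropy”) vanishes for the Gibbs state but not for, say, a population-inverted state. The three-level Hamiltonian and the inverse temperature below are arbitrary choices.

```python
# Toy illustration (mine): the Gibbs state is passive, a population-inverted state is not.
import numpy as np

def ergotropy(rho, ham):
    """Tr(rho H) minus the lowest energy reachable from rho by any unitary."""
    e = np.sort(np.linalg.eigvalsh(ham))          # energies, ascending
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]    # populations, descending
    return np.trace(rho @ ham).real - np.dot(r, e)

ham = np.diag([0.0, 1.0, 2.5])                    # toy 3-level Hamiltonian (my choice)
beta = 1.0
gibbs = np.diag(np.exp(-beta * np.diag(ham)))
gibbs /= np.trace(gibbs)
inverted = np.diag([0.2, 0.3, 0.5])               # more population in higher levels

print(ergotropy(gibbs, ham))      # ~0: the Gibbs state is passive
print(ergotropy(inverted, ham))   # > 0: a cyclic process can extract work
```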
So what is my objection? Well, notice that the wool has been pulled over your eyes in this argument. In the oldschool section the authors argued that if you try to store information in the CW system, then the amount of time it takes for this information to be destroyed grows with the system size. In other words, if you start a system out in the non-Gibbs states of stored information, then it will take a long time to thermalize into the Gibbs state. This is what is important for storing information. Notice in particular that despite the fact that thermodynamic equilibrium is the end result of the dynamics considered, the process described is from a non-equilibrium system to the equilibrium system. In the contradiction section of the paper the authors argue that the Gibbs state is the only state which does not violate the second law (or, in the infinite case, the states which form a simplex.) But what does this have to do with the process where we attempt to store quantum information in the system and then ask how long it takes to reach the thermal equilibrium where the information is destroyed? As far as I can tell it doesn’t have anything to do with this. In fact, if you think about their argument for a while, you will notice that for finite systems, they are arguing that the only allowable state is the completely passive Gibbs state. In other words, if you apply their argument for finite systems to the system described in the old school section, the conclusion is that there is only one state, and one state cannot store information, i.e. classical hard drives, if they are finite, are not possible!
So what was the fallacy in the argument? It was the assumption that the question of whether you can store information (classical or quantum) in a system is one that can be analyzed from equilibrium alone. Certainly the second law, as stated in the Kelvin formulation, holds for states in thermal equilibrium, but we are explicitly not asking about this situation: we are asking about the states encountered in moving from a nonequilibrium situation to an equilibrium situation. And we are interested in timescales, not about final states.
So what is the final section of their paper about? It is about Kitaev’s toric code model for storing quantum information. This is the section I call the strawman. In particular, as I have argued before (see my paper here or any of the many powerpoints of my talks where I talk about self-correcting quantum computers), it is well known that Kitaev’s model, in equilibrium, cannot store quantum information. The reason for this is the same as for the one dimensional Ising model. But we don’t care about equilibrium, remember. We care about how long it takes to move to equilibrium. Here is where the energy gap comes into play: Kitaev’s basic argument is that the creation of free thermal anyons will be suppressed like the Boltzmann factor [tex]$\exp[-\beta g]$[/tex] where [tex]$g$[/tex] is the gap energy. Thus, if we start the system away from equilibrium with the encoded quantum information, the probability that we will obtain a real thermal excitation is exponentially suppressed if your temperature is below the gap. Of course, every single qubit will be interacting with a bath, so the total rate of production of thermal anyons needs to be below one over the size of the system. Of course for infinite systems this can never be fulfilled. But for finite sized systems it can be fulfilled: note that the temperature only needs to decrease like one over the logarithm of the system size (and that logarithm, even for macroscopic systems, is rarely more than a factor of a hundred.) In the preprint, the authors basically point out that in the equilibrium state there is a constant fraction of real thermal excitations and these can diffuse around the torus to disorder the system: but again the question is what happens when you start out of equilibrium and how long it then takes to return to equilibrium. So I believe that in this section they are attacking basically a strawman. Of course I have my own personal worries about the toric code model and in particular about its fault tolerance properties. You can read about this in my paper on self-correcting quantum memories.
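To make the scaling explicit, here is a back-of-the-envelope sketch with my own made-up numbers (not from either paper): if each of N sites produces thermal anyons at a rate suppressed by the Boltzmann factor, keeping the total production rate below a constant only requires the ratio of gap to temperature to grow like log N.

```python
# Back-of-the-envelope scaling (my own numbers): how cold does a toric code of N sites need to be?
import numpy as np

g = 1.0                                   # energy gap, arbitrary units
for n_sites in (1e3, 1e6, 1e23):
    # require n_sites * exp(-g/T) <= 1, i.e. T <= g / ln(n_sites)
    print(f"N = {n_sites:.0e}: need T <= {g / np.log(n_sites):.3f} * g")
```

Even at Avogadro-scale system sizes the logarithm is only around fifty, which is the “rarely more than a factor of a hundred” above.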

Quantum Key Distribution: "Ready for Prime Time"

What should we make of this press release from MagiQ Technologies? The press release cites two advances:

The first breakthrough demonstrates there are no distance limitations to MagiQ’s quantum cryptography solution. Utilizing Verizon’s commercial fiber infrastructure, MagiQ successfully bridged two separate spans of 80km by cascading a number of MagiQ’s QPN 7505 devices. The commercial fiber network is made up of spans, typically 80km, linked together to complete metro area networks and long haul networks. Cascading of quantum cryptography devices enables the deployment of quantum cryptography throughout the telecommunications network.

and

The second breakthrough demonstrates that quantum keys can be mixed with other optical signals on one strand of fiber. Previously, quantum cryptography devices required a dedicated dark fiber strand, which is costly and not always available, for the transmission of quantum keys. The multiplexing of quantum keys with data on one strand significantly reduces the cost of deploying quantum cryptography.

The first item sounds like they are performing some sort of trusted-relay key infrastructure and not using quantum repeaters, right? I wonder how many of these MagiQ systems are actually in use right now? Ah, the world of cryptography, where every sentence I write is a question?
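To be concrete about my guess, here is a sketch of what I assume “cascading” means in practice: a trusted-node key relay, where each 80km span runs its own QKD link and the intermediate nodes forward an end-to-end key one-time-padded under the per-span keys. This is purely illustrative and assumes nothing about how MagiQ’s QPN 7505 boxes actually work.

```python
# Hypothetical trusted-node key relay (my assumption about what "cascading" means).
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

span_keys = [secrets.token_bytes(32) for _ in range(3)]    # one QKD key per 80km span
end_to_end_key = secrets.token_bytes(32)                   # key Alice wants to share with Bob

cipher = xor(end_to_end_key, span_keys[0])                 # Alice encrypts under the first span key
for k_prev, k_next in zip(span_keys, span_keys[1:]):
    cipher = xor(xor(cipher, k_prev), k_next)              # each trusted node decrypts, re-encrypts

assert xor(cipher, span_keys[-1]) == end_to_end_key        # Bob recovers the key at the far end
```

The catch, of course, is that every intermediate node sees the key in the clear, which is exactly why cascading is not the same thing as a quantum repeater.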

Gravitomagnetic London Moment?

Two papers (experiment: gr-qc/0603033, theory: gr-qc/0603032), an ESA press release, and blog posts (Uncertain Principles, Something Similar, and Illuminating Science) today are all about a recent experiment performed by Martin Tajmar (ARC Seibersdorf Research GmbH, Austria) and Clovis de Matos (ESA-HQ, Paris) which they claim shows a gravitomagnetic effect in a laboratory experiment. Not only do they claim that they observe a gravitomagnetic effect, but that the effect comes not from the standard general relativity mechanism, but instead from their own theory which has massive photons and massive graviphotons (which is what the authors call the carriers of the force which arises when one linearizes gravity and obtains equations which resemble the Maxwell equations, i.e. spin 1 instead of spin 2)! Color me skeptical.
The experiment is actually fairly straightforward. Just take a superconducting ring and spin it up. The authors then look for a gravitomagnetic field which can be measured by nearby accelerometers. Now, the gravitomagnetic field that standard general relativity predicts for this setup is about 30 orders of magnitude smaller than what they measure. But when they run this experiment they indeed do find accelerations in their accelerometers for superconducting rings (and none for non-superconducting rings.) The authors then interpret this effect as confirming evidence for their theory which invokes “gravitophotonic” masses. If this is correct, then this is an astounding result: not only have they detected a gravitomagnetic field, but it is a field which does not arise from standard general relativity. Wowzer.
Of course you can color me skeptical. As Chad Orzel points out, the signal they are talking about is only about 3 times as strong as their noise. Now when you look at one of their runs, i.e. figure 4 of gr-qc/0603033, the peaks look pretty good, no? Well, figure 4b is a little strange: the gravitomagnetic effect appears to occur before the acceleration. Okay, a bit strange, but a single run proves nothing, right? Okay, what about figure 5? Ignore the temperature dependence for now, but would you have picked out the peaks that they picked out? These things make me a little uneasy. Okay, so surely they did a lot of runs and tried to get some statistics on the effect. Indeed, they did something like this: this is figure 6. And this is what makes the paper frustrating: “Many measurements were conducted over a period from June to November 2005 to show the reproducibility of the results. Fig. 6 summarizes nearly 200 peaks of in-ring and above-ring tangential accelerations measured by the sensor and angular acceleration applied to the superconductors as identified e.g. in Fig 4 with both electric and air motor.” Why is this frustrating? Well, because I have no clue how they analyzed their runs and obtained the tangential accelerations. Were the peaks extracted by hand? (And what is the angular acceleration? Is it the average acceleration?) Argh, I can’t tell from the paper.
Well, certainly this is a very interesting set of papers (the theory paper is pretty crazy: if I get a chance I’ll try to go through it in some detail and post on it.) I would personally love to know more about the experiment. I spent this afternoon pondering whether I could think up a way in which they could get the effect they see, but I’m certainly no expert on this stuff, and might be missing something totally obvious.
(Note also that there is another claim of interesting gravitational effects around superconductors made by Podkletnov. For a history of this “anti-gravity” effect see Wikipedia. Since the Podkletnov experiment is considered controversial, the authors of the above article make sure to point out that they are not observing the effect Podkletnov claimed.)

Praying to Entangled Gods

From the Washington Post, in a fair and balanced (*ahem*) article on the effect of prayer on healing:

But supporters say that much about medicine remains murky or is explained only over time. They say, for example, that it was relatively recently that scientists figured out how aspirin works, although it has been in use for centuries.
“Yesterday’s science fiction often becomes tomorrow’s science,” said John A. Astin of the California Pacific Medical Center.
Proponents often cite a phenomenon from quantum physics, in which distant particles can affect each other’s behavior in mysterious ways.
“When quantum physics was emerging, Einstein wrote about spooky interactions between particles at a distance,” Krucoff said. “That’s at least one very theoretical model that might support notions of distant prayer or distant healing.”

Well yeah, it might support the notions of distant prayer or distant healing, except that it explicitly doesn’t: entanglement can’t be used to signal, and hence can’t be used to influence distant objects in the way distant prayer or distant healing would require. Argh!
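For the record, here is a toy check (nothing beyond textbook quantum mechanics, and my own construction) of the no-signaling point: whatever operation Alice performs on her half of an entangled pair, Bob’s reduced density matrix is unchanged, so no message, and no healing intention, gets through.

```python
# Toy no-signaling check: Alice's local operation leaves Bob's reduced state untouched.
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())

def bobs_state(rho4):
    """Partial trace over Alice's qubit (the first tensor factor)."""
    return rho4.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # something Alice might do
alice_op = np.kron(hadamard, np.eye(2))

before = bobs_state(rho)
after = bobs_state(alice_op @ rho @ alice_op.conj().T)

print(np.allclose(before, after))   # True: Alice's action is invisible to Bob
```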

The Cosmic Computer

Seth Lloyd has a new book out, Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos. I just picked up a copy and flipped to this amusing anecdote:

…When Shannon showed his new formula for information to the mathematician John von Neumann and asked him what the quantity he had just defined should be called, von Neumann is said to have replied “H.”
“Why H?” asked Shannon.
“Because that’s what Boltzmann called it,” said von Neumann…

That crazy von Neumann.

Strangest Quantum Computing Book?

Today I was in the University of Washington bookstore and noticed an oversized book in the physics section. The top of the book read “Quantum Computing,” so I pulled it out. The subtitle, however, was a bit of a surprise: “The Vedic Fabric of the Digital Universe.” Um. Who is this book for? “This book is for anyone who needs to understand the one-to-one correspondence between quantum computing and the ancient Vedic Literature.” Which makes me wonder if there is a many-to-one correspondence between quantum computing and other religious/philosophical texts? What do I think of the book? Well, this book has some of the most amazing diagrams I have ever seen, or think I will ever see, in a quantum computing book. For a taste you might visit www.vediccomputing.com. That’s what I think of this book.

APS TGQI Best Student Paper Awards

Congrats to the two winners of the first Best Student Paper Awards for the APS Topical Group on Quantum Information, Concepts and Computation: Michael Garrett (Calgary) and Chris Langer (NIST, Boulder). (What, you’ve already seen this announcement? Congrats! What, you’ve not seen this announcement? Must be because you’re not a member of the topical group. Maybe you should join? Then again, who am I to say what you should do!) The awards are sponsored by the Perimeter Institute (theory) and the Institute for Quantum Computing (experiment) and come with a $500 award, but more importantly with fabulous fame! (Hey, no jokes about the “American Physical Society” awards in quantum computing both being sponsored by Canadian institutions.)
Here is the abstract from Michael Garrett’s talk:

9:36AM U40.00007 Stochastic One-Way Quantum Computing with Ultracold Atoms in Optical Lattices, MICHAEL C. GARRETT, University of Calgary, DAVID L. FEDER, University of Calgary — The one-way model of quantum computation has the advantage over conventional approaches of allowing all entanglement to be prepared in a single initial step prior to any logical operations, generating the so-called cluster state. One of the most promising experimental approaches to the formation of such a highly entangled resource employs a gas of ultracold atoms confined in an optical lattice. Starting with a Mott insulator state of pseudospin-1/2 bosons at unit filling, an Ising-type interaction can be induced by allowing weak nearest-neighbor tunneling, resulting in the formation of a cluster state. An alternate approach is to prepare each spin state in its own sublattice, and induce collisional phase shifts by varying the laser polarizations. In either case, however, there is a systematic phase error which is likely to arise, resulting in the formation of imperfect cluster states. We will present various approaches to one-way quantum computation using imperfect cluster states, and show that the algorithms are necessarily stochastic if the error syndrome is not known.

and here is the abstract from Chris Langer’s talk

8:48AM U40.00003 Robust quantum memory using magnetic-field-independent atomic qubits, C. LANGER, R. OZERI, J. D. JOST, B. DEMARCO, A. BEN-KISH, B. BLAKESTAD, J. BRITTON, J. CHIAVERINI, D. B. HUME, W. M. ITANO, D. LEIBFRIED, R. REICHLE, T. ROSENBAND, P. SCHMIDT, D. J. WINELAND — Scalable quantum information processing requires physical systems capable of reliably storing coherent superpositions for times over which quantum error correction can be implemented. We experimentally demonstrate a robust quantum memory using a magnetic-field-independent hyperfine transition in 9Be+ atomic ion qubits at a field B = 0.01194 T. Qubit superpositions are created and analyzed with two-photon stimulated-Raman transitions. We observe the single physical qubit memory coherence time to be greater than 10 seconds, an improvement of approximately five orders of magnitude from previous experiments. The probability of memory error for this qubit during the measurement period (the longest timescale in our system) is approximately 1.4 × 10^-5, which is below the fault-tolerance threshold for common quantum error correcting codes.

What Men Are Poets Who Can Speak of Jupiter…

Daniel Dennett’s new book “Breaking the Spell” was reviewed by Leon Wieseltier in the New York Times a few weeks ago. The review was not very favorable, to say the least. Further, the review was not very well thought out. What proof of this do I have? Well, this last weekend, the entire letters section of the Sunday New York Times Book Review was filled with letters opposing Wieseltier’s review. Ouch. But really I’m only writing this blog post because I really, really liked one of the letters. Here it is:

In his review of “Breaking the Spell,” Leon Wieseltier couldn’t resist the reflexive accusation that building a worldview on a scientific base is reductive, and as is often the case, he trotted out the existence of art to capture our sympathies. As a composer, I am weary of being commandeered as evidence of supernatural forces. Unlike Wieseltier, I do not find it difficult to “envisage the biological utilities” of the “Missa Solemnis”; it merely requires a chain with more than one link. Art, particularly religious and nationalistic art, has powerful social effects. Human beings have achieved their stunning success by becoming master cooperators, and emotions that drive us toward shared experience are prominent among the inspirations and outcomes of everything from grand public art to intimate love songs. Our emotion-filled social lives are the direct result of biologically endowed capacities for communication, from language to the delicate network of expressive muscles in our faces, and even our private imaginations bear the imprint. Awareness that I’m participating in this chain of capabilities in no way deprives music of its wonder; it enhances it.
SCOTT JOHNSON
New York

“As a composer, I am weary of being commandeered as evidence of supernatural forces.” Awesome.

Bits, Bits, Wherefore Art Thou Bits?

Black holes evaporate via the process of Hawking radiation. If we take the initial pure state describing a body which will collapse and form a black hole which can evaporate, then it would appear that this pure state will evolve, after the complete evaporation, to a state which is mixed. Since it does this without leaving any remnant for quantum information to hide in, this would seem to violate the unitarity of quantum theory, which does not allow a closed-system pure state to evolve into a closed-system mixed state. Similarly, if we consider an evaporating black hole, there are spacelike surfaces which contain both the collapsing body and the outgoing Hawking radiation: therefore if we somehow believe that the outgoing Hawking radiation contains the quantum information which we have thrown into the black hole, this process must have cloned this information and thus violated the linearity of quantum theory. This reasoning has been hounding theoretical physicists for ages and is known as the black hole information paradox.
A few years ago, Horowitz and Maldacena proposed a solution to this problem (see hep-th/0310281, “The black hole final state”.) The basic idea of their solution is to use boundary conditions at the black hole singularity to fix this problem. Actually, the basic idea of their solution is to use post-selected quantum teleportation. How does this work? Well, consider the initial pure state describing the collapsing matter. In addition to this collapsing matter, there will be the Unruh state of the infalling and outgoing Hawking radiation. Now it works out that this Unruh state is basically a maximally entangled quantum state. So what Horowitz and Maldacena propose is that, in order to get the pure state describing the collapsing matter “out” of the black hole, one need only “teleport” this information by using the Unruh radiation. Indeed, we can consider such teleportation: perform a measurement of the pure state describing the collapsing matter and the state of the infalling Hawking radiation in the appropriate (generalized) Bell basis. Now, of course, in order to complete the teleportation procedure we need to send the result of this measurement to the outgoing radiation and apply the appropriate unitary rotation to get the pure state of the collapsing matter outside of the black hole. But, of course, we can’t do this: we can’t send the classical information from inside the black hole to outside. So what Horowitz and Maldacena propose is that instead of performing the full teleportation, a particular result is post-selected for this teleportation. And this result will be the one where you do nothing in the teleportation protocol. In other words, they postulate that the black-hole singularity acts like a measurement in which one always gets a particular outcome. This is the “final state” projection postulate.
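If you want to see the mechanism stripped of all the gravity, here is a toy numerical version (mine, not from the paper) with qubits: a “matter” qubit plus a maximally entangled “Unruh” pair for the infalling and outgoing radiation, with the final-state boundary condition modeled as a projection of matter plus infalling radiation onto a Bell state. The outgoing mode ends up in the matter state, and no correction operation is needed.

```python
# Toy post-selected teleportation (my own sketch of the Horowitz-Maldacena mechanism).
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                                       # the collapsing matter state

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # Unruh pair: (infalling, outgoing)

full = np.kron(psi, phi_plus).reshape(2, 2, 2)                   # indices: matter, infalling, outgoing

# The "final state" at the singularity: project matter + infalling onto |Phi+>.
bell_bra = phi_plus.conj().reshape(2, 2)
outgoing = np.einsum('mi,mio->o', bell_bra, full)
outgoing /= np.linalg.norm(outgoing)

print(abs(np.vdot(psi, outgoing))**2)   # ~1.0: the matter state escapes in the Hawking radiation
```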
This is a very nice way to try to avoid the black-hole information paradox. One reason is that it seems to put all the craziness at the black-hole singularity, and none of the craziness elsewhere, where we would expect our current ideas about physics to be rather robustly true. But does this really solve the problem? Gottesman and Preskill, in hep-th/0311269, argue that it does not. What Gottesman and Preskill point out is that the collapsing matter and the infalling Unruh radiation will, in general, interact with each other. The problem is that this interaction can cause the information to no longer be faithfully reproduced outside of the black hole: the interaction causes post-selection onto a state which is no longer maximally entangled, and such a state will fail to produce the appropriate teleported copy of the state outside of the black hole. Sometimes, in fact, this interaction can completely destroy the effect Horowitz and Maldacena are attempting to achieve (if it disentangles the post-selected state.) This does not bode well for this post-selected answer to the black-hole information paradox.
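Extending the toy sketch above (again, my own construction): insert a unitary interaction between the matter and the infalling radiation before the projection. With no interaction the state escapes perfectly; with a CNOT-like interaction the effective post-selected state is a product state, and the outgoing radiation retains essentially nothing about the matter state.

```python
# Toy version of the Gottesman-Preskill objection: an interaction before the projection
# can spoil (or completely destroy) the escape of the matter state.
import numpy as np

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell_bra = phi_plus.conj().reshape(2, 2)

def escape_fidelity(psi, u):
    """Fidelity of the outgoing mode with psi, for interaction u on (matter, infalling)."""
    full = np.kron(psi, phi_plus).reshape(2, 2, 2)               # matter, infalling, outgoing
    interacted = np.einsum('abmi,mio->abo', u.reshape(2, 2, 2, 2), full)
    outgoing = np.einsum('ab,abo->o', bell_bra, interacted)
    norm = np.linalg.norm(outgoing)
    return 0.0 if norm < 1e-12 else abs(np.vdot(psi, outgoing / norm))**2

rng = np.random.default_rng(2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

print(escape_fidelity(psi, np.eye(4, dtype=complex)))  # ~1.0: no interaction, perfect escape
print(escape_fidelity(psi, cnot))                      # typically well below 1: information lost
```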
So is this the end? Well, it is never the end! One might try to show that the amount of information destroyed in the Gottesman-Preskill scheme is small. Now this, in some sense, would be comforting: who would notice a single qubit missing in a world so large? On the other hand, while this might be comforting, it would certainly cause some consternation among those of us who would like an explanation which is satisfying without giving up even an ounce of information. This question, of the amount of information lost, is addressed, in part, in the recent paper “Almost Certain Escape from Black Holes in Final State Projection Models” by Seth Lloyd, Physical Review Letters, 061302 (2006).
Consider the Horowitz-Maldacena scheme as modified by Gottesman and Preskill, with an interaction between the infalling radiation and the collapsing matter described by a unitary U. Lloyd calculates the fidelity of this scheme, averaged over all unitaries U according to the Haar measure (this fidelity is the overlap between the initial pure collapsing matter state and the outgoing state after the Horowitz-Maldacena post-selection.) Lloyd finds that this average fidelity is 0.85…. In the paper, Seth argues that this is large enough to indicate that most of the information survives. So, I ask, is a fidelity of 0.85… really satisfying? I would argue that it is not, even if I accept that averaging over the unitaries makes any sense at all. Why? Well, suppose that you daisy chain this procedure. Then it seems that your average fidelity could be made as small as desired. Thus while you might argue that there is only a small loss of unitarity for random unitaries, there are physical processes for which this loss of unitarity is huge. This seems to be one of the lessons of quantum theory: destroying only a little bit of information is hard to do without bringing down the whole wagon. Daniel Gottesman says, I think, exactly this in this New Scientist article. (I’ve put a toy numerical version of this averaged fidelity after the quote below.) Even Lloyd, at the end of his article, hedges his bets a bit:

Final-state projection will have to await experimental and theoretical confirmation before black holes can be used as quantum computers. It would be premature to jump into a black hole just now.
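And here is a rough Monte Carlo version of the kind of average Lloyd computes, reusing the escape-fidelity toy from above (repeated so this block runs on its own): the escape fidelity averaged over Haar-random interactions and random input states. I have not checked that this matches Lloyd’s precise averaging, so treat the number it prints as illustrative only.

```python
# Rough Monte Carlo of the Haar-averaged escape fidelity (my own toy, not Lloyd's calculation).
import numpy as np

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell_bra = phi_plus.conj().reshape(2, 2)

def escape_fidelity(psi, u):
    full = np.kron(psi, phi_plus).reshape(2, 2, 2)
    interacted = np.einsum('abmi,mio->abo', u.reshape(2, 2, 2, 2), full)
    outgoing = np.einsum('ab,abo->o', bell_bra, interacted)
    norm = np.linalg.norm(outgoing)
    return 0.0 if norm < 1e-12 else abs(np.vdot(psi, outgoing / norm))**2

def haar_unitary(n, rng):
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))          # fix phases to get the Haar measure

rng = np.random.default_rng(3)
fids = []
for _ in range(5000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    fids.append(escape_fidelity(psi, haar_unitary(4, rng)))
print(np.mean(fids))
```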

What about you? Would you jump into a black hole if the fidelity with which we could put you back together were 0.85… on average?