The Cosmic Computer

Seth Lloyd has a new book out, Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos. I just picked up a copy and flipped to this amusing anecdote:

…When Shannon showed his new formula for information to the mathematician John von Neumann and asked him what the quantity he had just defined should be called, von Neumann is said to have replied “H.”
“Why H?” asked Shannon.
“Because that’s what Boltzmann called it,” said von Neumann…

That crazy von Neumann.

Strangest Quantum Computing Book?

Today I was in the University of Washington bookstore and noticed an oversized book in the physics section. The top of the book read “Quantum Computing,” so I pulled it out. The subtitle, however, was a bit of a surprise: “The Vedic Fabric of the Digital Universe.” Um. Who is this book for? “This book is for anyone who needs to understand the one-to-one correspondence between quantum computing and the ancient Vedic Literature.” Which makes me wonder: is there a many-to-one correspondence between quantum computing and other religious/philosophical texts? What do I think of the book? Well, this book has some of the most amazing diagrams I have ever seen, or think I will ever see, in a quantum computing book. For a taste you might visit www.vediccomputing.com. That’s what I think of this book.

APS TGQI Best Student Paper Awards

Congrats to the two winners of the first Best Student Paper Awards for the APS Topical Group on Quantum Information, Concepts and Computation: Michael Garrett (Calgary) and Chris Langer (NIST, Boulder). (What, you’ve already seen this announcement? Congrats! What, you’ve not seen this announcement? Must be because you’re not a member of the topical group. Maybe you should join? Then again, who am I to say what you should do!) The awards are sponsored by the Perimeter Institute (theory) and the Institute for Quantum Computing (experiment) and come with a $500 award, but more importantly with fabulous fame! (Hey, no jokes about the “American Physical Society” awards in quantum computing both being sponsored by Canadian institutions.)
Here is the abstract from Michael Garrett’s talk:

9:36AM U40.00007 Stochastic One-Way Quantum Computing with Ultracold Atoms in Optical Lattices, MICHAEL C. GARRETT, University of Calgary, DAVID L. FEDER, University of Calgary — The one-way model of quantum computation has the advantage over conventional approaches of allowing all entanglement to be prepared in a single initial step prior to any logical operations, generating the so-called cluster state. One of the most promising experimental approaches to the formation of such a highly entangled resource employs a gas of ultracold atoms confined in an optical lattice. Starting with a Mott insulator state of pseudospin-1/2 bosons at unit filling, an Ising-type interaction can be induced by allowing weak nearest-neighbor tunneling, resulting in the formation of a cluster state. An alternate approach is to prepare each spin state in its own sublattice, and induce collisional phase shifts by varying the laser polarizations. In either case, however, there is a systematic phase error which is likely to arise, resulting in the formation of imperfect cluster states. We will present various approaches to one-way quantum computation using imperfect cluster states, and show that the algorithms are necessarily stochastic if the error syndrome is not known.

and here is the abstract from Chris Langer’s talk:

8:48AM U40.00003 Robust quantum memory using magnetic-field-independent atomic qubits, C. LANGER, R. OZERI, J. D. JOST, B. DEMARCO, A. BEN-KISH, B. BLAKESTAD, J. BRITTON, J. CHIAVERINI, D. B. HUME, W. M. ITANO, D. LEIBFRIED, R. REICHLE, T. ROSENBAND, P. SCHMIDT, D. J. WINELAND — Scalable quantum information processing requires physical systems capable of reliably storing coherent superpositions for times over which quantum error correction can be implemented. We experimentally demonstrate a robust quantum memory using a magnetic-field-independent hyperfine transition in ⁹Be⁺ atomic ion qubits at a field B = 0.01194 T. Qubit superpositions are created and analyzed with two-photon stimulated-Raman transitions. We observe the single physical qubit memory coherence time to be greater than 10 seconds, an improvement of approximately five orders of magnitude from previous experiments. The probability of memory error for this qubit during the measurement period (the longest timescale in our system) is approximately 1.4 × 10⁻⁵, which is below the fault-tolerance threshold for common quantum error correcting codes.

What Men Are Poets Who Can Speak of Jupiter…

Daniel Dennett’s new book “Breaking the Spell” was reviewed by Leon Wieseltier in the New York Times a few weeks ago. The review was not very favorable, to say the least. Further, the review was not very well thought out. What proof of this do I have? Well, this last weekend, the entire letters section of the Sunday New York Times Book Review was filled with letters opposing Wieseltier’s review. Ouch. But really I’m only writing this blog post because I really really liked one of the letters. Here it is:

In his review of “Breaking the Spell,” Leon Wieseltier couldn’t resist the reflexive accusation that building a worldview on a scientific base is reductive, and as is often the case, he trotted out the existence of art to capture our sympathies. As a composer, I am weary of being commandeered as evidence of supernatural forces. Unlike Wieseltier, I do not find it difficult to “envisage the biological utilities” of the “Missa Solemnis”; it merely requires a chain with more than one link. Art, particularly religious and nationalistic art, has powerful social effects. Human beings have achieved their stunning success by becoming master cooperators, and emotions that drive us toward shared experience are prominent among the inspirations and outcomes of everything from grand public art to intimate love songs. Our emotion-filled social lives are the direct result of biologically endowed capacities for communication, from language to the delicate network of expressive muscles in our faces, and even our private imaginations bear the imprint. Awareness that I’m participating in this chain of capabilities in no way deprives music of its wonder; it enhances it.
SCOTT JOHNSON
New York

“As a composer, I am weary of being commandeered as evidence of supernatural forces.” Awesome.

Bits, Bits, Wherefore Art Thou Bits?

Black holes evaporate via the process of Hawking radiation. If we take the initial pure state describing a body which will collapse and form a black hole which can evaporate, then it would appear that this pure state will evolve, after the complete evaporation, to a state which is mixed. Since it does this without leaving any remnant for quantum information to hide in, this would seem to violate the unitarity of quantum theory, which does not allow a closed-system pure state to evolve to a closed-system mixed state. Similarly, if we consider an evaporating black hole, there are spacelike surfaces which contain both the collapsing body and the outgoing Hawking radiation: therefore if we somehow believe that the outgoing Hawking radiation contains the quantum information we have thrown into the black hole, this process must have cloned that information and thus violated the linearity of quantum theory. This reasoning has been hounding theoretical physicists for ages and is known as the black hole information paradox.
A few years ago, Horowitz and Maldacena proposed a solution to this problem (see hep-th/0310281, “The black hole final state”). The basic idea of their solution is to use boundary conditions at the black hole singularity to fix this problem. Actually, the basic idea of their solution is to use post-selected quantum teleportation. How does this work? Well, consider the initial pure state describing the collapsing matter. In addition to this collapsing matter, there will be the Unruh state of the infalling and outgoing Hawking radiation. Now it works out that this Unruh state is basically a maximally entangled quantum state. So what Horowitz and Maldacena propose is that, in order to get the pure state describing the collapsing matter “out” of the black hole, one need only “teleport” this information by using the Unruh radiation. Indeed, we can consider such a teleportation: perform a measurement of the pure state describing the collapsing matter and the state of the infalling Hawking radiation in the appropriate (generalized) Bell basis. Now, of course, in order to complete the teleportation procedure we need to send the result of this measurement to the outgoing radiation and apply the appropriate unitary rotation to recover the pure state of the collapsing matter outside of the black hole. But, of course, we can’t do this: we can’t send the classical information from inside the black hole to outside. So what Horowitz and Maldacena propose is that instead of performing the full teleportation, a particular result is post-selected for this teleportation. And this result is the one where you do nothing in the teleportation protocol. In other words, they postulate that the black-hole singularity acts like a measurement in which one always gets a particular outcome. This is the “final state” projection postulate.
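Since this really is just post-selected teleportation, it’s easy to see the mechanics in a little numerical sketch (my own toy illustration in Python/numpy, not code from the Horowitz-Maldacena paper): project the matter qubit and the infalling mode onto the Bell state corresponding to the “do nothing” outcome, and the matter state reappears on the outgoing mode with no correction unitary needed.

```python
import numpy as np

# Arbitrary example state of the collapsing matter (qubit A).
psi = np.array([0.6, 0.8j])

# Unruh state of the infalling (B) and outgoing (C) Hawking modes,
# idealized as the maximally entangled pair |Phi+> = (|00> + |11>)/sqrt(2).
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Full initial state, tensor ordering A (x) B (x) C.
state = np.kron(psi, phi_plus)

# Final-state boundary condition at the singularity: project modes A and B
# onto <Phi+|, the post-selected "do nothing" teleportation outcome.
proj = np.kron(phi_plus.conj(), np.eye(2))   # <Phi+|_AB (x) I_C, shape (2, 8)
out = proj @ state
out /= np.linalg.norm(out)                   # renormalize after post-selection

# The outgoing mode carries the matter state exactly: fidelity 1.
print(np.abs(np.vdot(psi, out))**2)
```

In ordinary teleportation this outcome occurs with probability 1/4 and the other three outcomes require a Pauli correction; the final-state postulate is that the singularity always “measures” this one outcome.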
This is a very nice way to try to avoid the black-hole information paradox. One reason is that it seems to put all the craziness at the black-hole singularity, and none of the craziness elsewhere, where we would expect our current ideas about physics to be rather robustly true. But does this really solve the problem? Gottesman and Preskill, in hep-th/0311269, argue that it does not. What Gottesman and Preskill point out is that the collapsing matter and the infalling Unruh radiation will, in general, interact with each other. The problem is that this interaction can cause the information to no longer be faithfully reproduced outside of the black hole: the interaction amounts to post-selection onto a state which is no longer maximally entangled, and that state will then fail at producing the appropriate teleported copy of the state outside of the black hole. Sometimes, in fact, this interaction can completely destroy the effect Horowitz and Maldacena are attempting to achieve (if it disentangles the post-selected state). This does not bode well for this post-selected answer to the black-hole information paradox.
So is this the end? Well, it is never the end! One might try to show that the amount of information destroyed in the Gottesman-Preskill scheme is small. Now this, in some sense, would be comforting: who would notice a single qubit missing in a world so large? On the other hand, while this might be comforting, it would certainly cause consternation among those of us who would like an explanation which is satisfying without giving up even an ounce of information. This question, of the amount of information lost, is addressed, in part, in the recent paper “Almost Certain Escape from Black Holes in Final State Projection Models” by Seth Lloyd, Physical Review Letters, 061302 (2006).
Consider the Horowitz-Maldacena scheme as modified by Gottesman and Preskill, with an interaction between the infalling radiation and the collapsing matter state described by a unitary U. Lloyd calculates the fidelity of this scheme, averaged over all unitaries U according to the Haar measure (this fidelity is the overlap between the initial pure collapsing-matter state and the outgoing state after the Horowitz-Maldacena post-selection). Lloyd finds that this fidelity is 0.85…. In the paper, Seth argues that this is indeed large enough to indicate that most of the information survives. So, I ask, is 0.85… fidelity really satisfying? I would argue that it is not, even if I accept that averaging over the unitaries makes any sense at all. Why? Well, suppose that you daisy-chain this procedure. Then it seems that your average fidelity could be made as small as desired. Thus while you might argue that there is only a small loss of unitarity for random unitaries, there are physical processes for which this loss of unitarity is huge. This seems to be one of the lessons of quantum theory: destroying only a little bit of information is hard to do without bringing down the whole wagon. Daniel Gottesman says, I think, exactly this in this New Scientist article. Even Lloyd, at the end of his article, hedges his bets a bit:

Final-state projection will have to await experimental and theoretical confirmation before black holes can be used as quantum computers. It would be premature to jump into a black hole just now.

What about you? Would you jump into a black hole if the fidelity with which we could put you back together were, on average, 0.85…?
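If you want to poke at Lloyd’s average yourself, here is a rough Monte Carlo sketch (my own, in Python/numpy; the averaging convention, unweighted over outcomes and input states, is my assumption and may differ in detail from the paper’s): insert a Haar-random interaction U between the matter and the infalling mode before the final-state projection, and average the resulting fidelity.

```python
import numpy as np
rng = np.random.default_rng(0)

def haar_unitary(n):
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix the column phases

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
proj = np.kron(phi_plus.conj(), np.eye(2))        # <Phi+|_AB (x) I_C

fidelities = []
for _ in range(20000):
    # Haar-random interaction between matter (A) and infalling mode (B).
    U = np.kron(haar_unitary(4), np.eye(2))
    # Haar-random input qubit state.
    v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    psi = v / np.linalg.norm(v)
    out = proj @ (U @ np.kron(psi, phi_plus))
    norm = np.linalg.norm(out)
    if norm > 1e-12:                              # skip measure-zero failures
        fidelities.append(np.abs(np.vdot(psi, out / norm))**2)

# Average fidelity estimate; compare with the 0.85... Lloyd reports
# (his averaging convention may weight outcomes differently).
print(np.mean(fidelities))
```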

Are You Sure You See?

I’m visiting the KITP in Santa Barbara because they are having a term-long workshop on Topological Phases and Quantum Computation (directed by Sander Bais, Chetan Nayak, and John Preskill). Unfortunately I won’t be able to stay for the entire workshop. But this isn’t as huge a blow as it would have been years ago, because the KITP records all of the talks and puts them online. They can be found here.
Yesterday, T. Senthil gave a talk in which he almost made it through his introduction! Anyway, one point which he emphasized is something I have always found fascinating: topological phases (a loose term which I won’t even try to define) might actually be much more common than is widely thought, but the problem is that we don’t have the tools to see them! From the perspective of quantum error correcting codes, this seems fairly natural. In a quantum error correcting code, local probes, without at least some global analysis, cannot reveal the information encoded in the code. Indeed, if local probes without global analysis could get at our quantum data, then so too could an environment. Another way to state this is to say that the order parameter for a quantum error correcting code is a big complicated beast whose measurement cannot be done with the simple tools we currently have in our labs.
As a concrete example of this, consider single-molecule magnets. These are crazy molecules which can achieve permanent magnetization through a single-molecule effect (usually at fairly low temperatures). Basically these molecules have a high-spin ground state and a zero-field splitting large enough that there is a substantial barrier to leaving the high-spin ground state. This effect is the result of a collection of interactions among the constituent lower-spin systems. What is interesting is the following observation. Take four spin-1/2 systems and couple them together antiferromagnetically with equal couplings between all four systems (mmm, tetrahedra). Such a system will have a spin-zero ground state which is twofold degenerate. And this twofold degenerate ground state is actually a single-error quantum error detecting code (see quant-ph/0012018)! But why in the world, if you were a chemist, would you go looking for a spin-zero molecule? This is the exact opposite of what you would like to find. Further, you won’t be able to see that there is a quantum error detecting code without a knob which allows you to split the degeneracy of this ground state. And adding such a knob is not a trivial task. In short, exactly because we don’t have the tools to observe this effect, we won’t really be interested in it. You need a crazy theorist to tell you that maybe you should think about trying to engineer such a knob onto your system. What use is a crazy theorist except to tell you crazy stories?
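Here is a quick exact-diagonalization check of that tetrahedron (my own sketch in Python/numpy, with conventions I chose, not anything from quant-ph/0012018): four spin-1/2s with equal antiferromagnetic Heisenberg couplings between every pair have a twofold degenerate spin-zero ground level, and that degenerate pair of states is where the code lives.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def embed(op, site, n=4):
    # Place a single-spin operator at `site` in an n-spin Hilbert space.
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Equal antiferromagnetic coupling between all pairs: H = sum_{i<j} S_i . S_j.
n = 4
H = sum(embed(s, i, n) @ embed(s, j, n)
        for i in range(n) for j in range(i + 1, n)
        for s in (sx, sy, sz))

# The two lowest eigenvalues coincide: a twofold degenerate ground level.
print(np.round(np.linalg.eigvalsh(H)[:4], 6))   # [-1.5, -1.5, -0.5, -0.5]
```

(The degeneracy pattern follows from H = [S_total(S_total + 1) - 3]/2 here: the two total-spin-zero states sit at the bottom, exactly the twofold degenerate ground state described above.)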
Of course, thinking like this, that there might be hidden orders which we are not smart enough to discover, is a good way to make yourself paranoid. What might we be hiding from because our glasses are so opaque? Certainly the role of instrumentation in science is one I find fascinating and, I must admit, a bit scary.

So I'll Just Introduce Him As…

An amusing anecdote from Fischer Black and the Revolutionary Idea of Finance by Perry Mehrling. For background, Paul Samuelson won the Nobel prize in Economics (okay, so it’s not really a true Nobel, but, whatever) in 1970 and is considered one of the founders of modern neoclassical economics. Robert Merton won the (pseudo?) Nobel prize in Economics in 1997 (along with Myron Scholes) for work in understanding stock options. At the time of this anecdote, Samuelson was a professor at MIT, just a few years before he received his Nobel prize, and Merton was a young graduate student.

…in October 1968 Samuelson was slated to give the inaugural lecture for the new MIT-Harvard Joint Seminar in Mathematical Economics. It was to be a big event, held in a special room in Holyoke Center at Harvard, and Samuelson’s name was the featured draw on all the publicity notices. All the big Harvard names would be there-Kenneth Arrow, Wassily Leontief, Zvi Griliches, Robert Dorfman, Hendrik Houthakker-and important faculty would be visiting from other area universities as well. As a further mark of the event’s special status, no students would be allowed to attend-no students, that is, except for Robert Merton, whom Samuelson arranged to give the talk in his place, but without telling the organizers.
When the moment came, after the requisite ceremony and introduction, Samuelson stood up at one end of the long conference table and spoke. “This is a joint paper, and my co-author will present it. I’d like to introduce him as a professor, but he is not a professor. I’d like to introduce him as a Doctor, but he has no Ph.D. So I’ll just introduce him as Mr. Robert Merton.”

Dance Ions! Dance!

A reliable source (heh, that’s funny, isn’t it?) tells me that the ion trap dance I discussed here was the “Open Box Salsa.” Because, you know, I’m sure you all really wanted to know that.
(Which of course reminds me of a story. An old teaching technique for those who have trouble spelling (like me) is to learn to visualize the word you are trying to spell while looking, say, up and to the right. You know, that introspective thing you do when you are searching for some bit of knowledge. The idea is to leverage that to help your spelling. Well, a few years back I decided that what was more important than remembering things was forgetting things. Indeed, like the above information, I’ll bet you were better off without it. So I started training myself to look down when I am trying to forget something. Not sure if it ever really worked. Or at least I don’t remember if it worked.)

What Are the Assumptions of Classical Fault-Tolerance?

Oftentimes when we encounter the threshold theorem for quantum computation, it is pointed out that there are two assumptions which are necessary for fault-tolerant quantum computation. These two assumptions are

1. A supply of refreshable pure or nearly pure ancillas into which the entropy generated by quantum errors can be dumped throughout the error correction procedure.
2. Parallel operations.

For point 1, an important reference is Aharonov, Ben-Or, Impagliazzo and Nisan, quant-ph/9611028, and for point 2, an important reference is Aharonov and Ben-Or, quant-ph/9611029. These two assumptions are provably necessary. Usually when I see these two points presented, they are presented as if they were something unique to quantum computation. But is this really true? Well, certainly not for the first case. In fact quant-ph/9611028 is first about computation in a noisy model of classical reversible computation and only second about a noisy model of quantum computation! What about point 2? Well, I can’t find a reference showing this, but it seems that the arguments presented in quant-ph/9611029 can be extended to noisy reversible classical computation.
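For intuition on why the classical story works at all, here is a toy Monte Carlo sketch (my own, in Python/numpy, with made-up numbers): a three-bit repetition code with majority voting suppresses the error rate from p to roughly 3p², which is the seed of every threshold argument, classical or quantum.

```python
import numpy as np
rng = np.random.default_rng(1)

p = 0.01            # probability each physical bit flips in one round
trials = 200_000

# Encode a logical bit as three copies; flip each copy independently with
# probability p; decode by majority vote. A logical error needs >= 2 flips.
flips = rng.random((trials, 3)) < p
logical_errors = flips.sum(axis=1) >= 2

print(logical_errors.mean())   # roughly 3*p**2 = 3e-4, far below the raw p
```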
Of course many of you will, by now, be grumbling that I haven’t included the other assumptions usually stated for the threshold theorem for fault-tolerant quantum computation. Okay, so I’ll put it in:

3. A noise model, relative to your control of the system, which is not too ridiculous.

Well, okay, usually it is not stated this way, but that’s because I don’t have an easy way to encapsulate the noise models which provably preclude fault-tolerant computation. But certainly there are similar noise constraints for classical computation which are provably necessary for fault-tolerant classical computation.
So what are the differences between the assumptions for fault-tolerant quantum and fault-tolerant classical computation? I strongly believe that the only differences here are differences arising simply from going from a theory of probabilities in classical computation to a theory of amplitudes in quantum computation. Of course this shortchanges the sweat and tears necessary to build the theory of fault-tolerant quantum computation, and I don’t want to disparage this in any way. I just want to suggest that the more we learn about the theory of fault-tolerant quantum computation, the more we recognize its similarity to probabilistic classical computation. I call this general view of the world the “principle of the ubiquitous factor of two in quantum computing.” The idea being mostly that quantum theory differs from probabilistic theory only in the necessity to deal with phase as well as amplitude errors, or more generally, to deal with going from probabilities to amplitudes. This point of view is certainly nothing more than a heuristic, but it seems at least that this is the direction we are heading towards.
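As a one-line illustration of that factor of two (my own toy check, again in Python/numpy): a phase error is just a bit-flip error viewed in the conjugate basis, since HXH = Z, so classical intuitions about bit errors carry over once you remember to apply them in two bases.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard (basis change)
X = np.array([[0, 1], [1, 0]])                 # bit-flip (amplitude) error
Z = np.array([[1, 0], [0, -1]])                # phase-flip error

# A phase error is a bit error in the Hadamard-rotated basis.
print(np.allclose(H @ X @ H, Z))               # True
```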
Of course the above view is speculative, not rigorous, and open to grand debate over beers or within the battleground of an academic seminar. But it certainly is one of the reasons why I’m optimistic about quantum computing, and why, when I talk to a researcher in higher-dimensional black holes (for example) and they express pessimism, I find it hard to put myself in their shoes. To summarize that last sentence: I’m betting we have quantum computers before higher-dimensional black holes are discovered 🙂

Startup Pages

For many years I have used a simple customized startup webpage for all of the quick links I need. For many years I have also wanted to write some code to let me customize this startup page. But now I don’t have to, because I’ve discovered Protopage. I’ve just started using it, but so far, so good.