Quick, to the Ivory Towers!

Particle physicists have always considered themselves the kings of physics. Murray Gell-Mann famously dismissed solid state physics with the moniker “squalid state physics.” In the ivory towers where scientists picture themselves as selfless serfs in the service of knowledge, particle theorists have long occupied the attic. At the same time, there is another community of the mathematically inclined who claim that they do their work for the greater good of knowledge: programmers. In particular, the open source spirit of programming, the idea that good code is in some way eternal and should be shared and contributed to the greater cause, gives good coders an air of superiority not dissimilar to that found in particle theorists.
And when I think about these two fields, I begin to think that perhaps quantum computing is today’s version of the selfless king in search of knowledge. Not only are we learning about the fundamental ways in which quantum information and computation differ from classical information and computation, but I think many of us in the quantum computing community also feel that our work will have some greater consequence once a quantum computer is eventually built. We are, therefore, I think, a rather smug community, not very dissimilar to particle theory or the ethic of the eternally beautiful algorithm. Whether this smugness will be our undoing, our triumph, or our own psychosis with which we will beat ourselves over the head is another question.

Scientific Thank You

Who is the most thanked person in computer science? According to an analysis of the CiteSeer database performed by Giles and Councill, the most thanked person in computer science is Olivier Danvy. I was also interested to see that the institutions I’ve been a member of, Caltech, Berkeley, and the Santa Fe Institute, are all in the top ten of most thanked educational institutions (third, seventh, and fourth respectively). I can understand why the Santa Fe Institute is high on the most thanked list, since its extensive visitor and workshop programs are a great way to generate acknowledgements, but I was a bit shocked by how high Caltech was on this list.

Quantum Dollars

At the workshop here a claim was made that the world has now spent over one billion dollars in the field of quantum information science since the discovery of Shor’s algorithm in 1994. How many billions more before a useful quantum computer is built?

Ion Trap Milestone

The slow, steady advance of ion traps! A milestone: “Realization of quantum error correction,” J. Chiaverini, D. Leibfried, T. Schaetz, M. D. Barrett, R. B. Blakestad, J. Britton, W. M. Itano, J. D. Jost, E. Knill, C. Langer, R. Ozeri & D. J. Wineland, Nature 432, 602–605 (2004)

Scalable quantum computation and communication require error control to protect quantum information against unavoidable noise. Quantum error correction protects information stored in two-level quantum systems (qubits) by rectifying errors with operations conditioned on the measurement outcomes. Error-correction protocols have been implemented in nuclear magnetic resonance experiments, but the inherent limitations of this technique prevent its application to quantum information processing. Here we experimentally demonstrate quantum error correction using three beryllium atomic-ion qubits confined to a linear, multi-zone trap. An encoded one-qubit state is protected against spin-flip errors by means of a three-qubit quantum error-correcting code. A primary ion qubit is prepared in an initial state, which is then encoded into an entangled state of three physical qubits (the primary and two ancilla qubits). Errors are induced simultaneously in all qubits at various rates. The encoded state is decoded back to the primary ion one-qubit state, making error information available on the ancilla ions, which are separated from the primary ion and measured. Finally, the primary qubit state is corrected on the basis of the ancillae measurement outcome. We verify error correction by comparing the corrected final state to the uncorrected state and to the initial state. In principle, the approach enables a quantum state to be maintained by means of repeated error correction, an important step towards scalable fault-tolerant quantum computation using trapped ions.
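The heart of the experiment is the textbook three-qubit repetition code, so here is a minimal state-vector sketch of that code in Python with numpy. This is purely illustrative: it simulates the textbook encode/error/decode/correct cycle, not the ion-trap protocol, its gate set, or its error model.

import numpy as np

# Toy state-vector simulation of the three-qubit spin-flip (bit-flip) code.
# Illustrative only: this is the textbook code, not the ion-trap implementation.

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(n, control, target):
    # CNOT on n qubits; qubit 0 is the leftmost (most significant) tensor factor.
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for basis in range(dim):
        bits = [(basis >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        out = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[out, basis] = 1
    return U

a, b = 0.6, 0.8                                   # primary qubit a|0> + b|1>
psi = np.kron(np.array([a, b]), [1, 0, 0, 0])     # |psi>|0>|0>

# Encode into a|000> + b|111> with two CNOTs from the primary qubit.
encode = cnot(3, 0, 2) @ cnot(3, 0, 1)
psi = encode @ psi

# Induce a spin-flip error on one randomly chosen qubit.
flipped = np.random.randint(3)
psi = kron(*[X if k == flipped else I2 for k in range(3)]) @ psi

# Decode (the encoding circuit is its own inverse); the ancillas now carry the syndrome.
psi = encode @ psi

# Read the syndrome off the two ancilla qubits.
probs = np.abs(psi.reshape(2, 2, 2)) ** 2
s1, s2 = np.unravel_index(np.argmax(probs.sum(axis=0)), (2, 2))

# Syndrome (1,1) means the primary qubit itself was flipped: correct it.
if (s1, s2) == (1, 1):
    psi = kron(X, I2, I2) @ psi

recovered = psi.reshape(2, 4)[:, 2 * s1 + s2]
print("error on qubit", flipped, "syndrome", (int(s1), int(s2)))
print("recovered primary qubit amplitudes:", recovered.real)   # should be [0.6, 0.8]

The syndrome read off the two ancillas tells you whether the primary qubit itself was flipped, and undoing that flip recovers the original state, which is just the comparison the experiment performs with real ions.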

CEPI Seminar 11/10/04 Wim van Dam

Complexity, Entropy, and the Physics of Information Lecture Series
Wednesday, November 10, 2004, 5:00 PM. Refreshments 4:15 PM.
Robert N. Noyce Conference Room, Santa Fe Institute
Wim van Dam
Computer Science Dept., University of California, Santa Barbara
Quantum Computing, Zeroes of Zeta Functions & Approximate Counting
Abstract:
In this talk I describe a possible connection between quantum computing and Zeta functions of finite field equations that is inspired by the ‘spectral approach’ to the Riemann conjecture. The assumption is that the zeroes of such Zeta functions correspond to the eigenvalues of finite dimensional unitary operators of natural quantum mechanical systems. To model the desired quantum systems I use the notion of universal, efficient quantum computation.
Using eigenvalue estimation, such quantum systems are able to approximately count the number of solutions of the specific finite field equations with an accuracy that does not appear to be feasible classically. For certain equations (Fermat hypersurfaces) I show that one can indeed model their Zeta functions with efficient quantum algorithms, which gives some evidence in favour of the proposal.
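For readers who haven’t seen it, here is a minimal numpy sketch of the eigenvalue-estimation (phase-estimation) primitive the abstract leans on. This is not van Dam’s Zeta-function construction, just the standard trick: given an eigenstate of a unitary, t ancilla qubits plus an inverse Fourier transform pin the eigenphase down to about one part in 2^t.

import numpy as np

# Minimal simulation of phase (eigenvalue) estimation for a known eigenstate.
# U|u> = exp(2*pi*i*phi)|u>; we estimate phi using t ancilla qubits.
t = 10
N = 2 ** t
phi = 0.3141                          # the hidden eigenphase (made-up value)

# After the controlled-U^(2^k) gates, the ancilla register holds
# (1/sqrt(N)) * sum_j exp(2*pi*i*phi*j) |j>.
ancillas = np.exp(2j * np.pi * phi * np.arange(N)) / np.sqrt(N)

# The inverse quantum Fourier transform is, in this simulation, a plain inverse DFT.
after_iqft = np.fft.fft(ancillas) / np.sqrt(N)

# The outcome distribution peaks at the integer nearest phi * N, so the
# estimate is accurate to roughly 1/N.
estimate = np.argmax(np.abs(after_iqft) ** 2) / N
print("true phase:", phi, " estimated phase:", estimate)

In the proposal above, the role of the unitary is played by a natural quantum system whose eigenvalues encode the zeroes of the Zeta function, and eigenvalue estimation then yields the approximate solution counts.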

Fate

One of the reasons I got interested in physics was that I have always been interested in the “question of free will.” Physicists don’t like to talk about free will much, especially since learning what quantum theory has to say about free will seems to put you smack dab in the middle of the measurement problem in quantum theory. In many ways, what I’m most interested in is not the question of free will, which I find too often to be an overly anthropocentric enterprise, but more the question of the determinism / indeterminism of physics. But the “free will question” has played a major role in shaping why I chose to do physics.
And so the question becomes: why was I interested in free will? Most of it is surely due to my older sister Cathy. You see, Cathy is a little person. No one knows exactly what syndrome she has, but it causes her to be lopsided (one arm and leg shorter than the other), gives her very poor vision and hearing, and leaves her with some mental difficulties. This makes it all sound really bad, which it definitely is not, because Cathy is an amazing light in our family. She works at the local library in Yreka, loves to listen to her John Denver tapes, loves to watch Jeopardy, and is, in general, a very happy person who brightens the lives of her many, many friends.
But if you grow up with a sister like Cathy, you cannot avoid thinking about why you ended up the way you are and why she ended up the way she is. Was it fate, and totally out of the hands of human choice? Science, and physics in particular, is the path one is reduced to in order to possibly find any answer to such a question. While we can argue forever whether reductionism to fundamental physics is central to answering this question, there can be no doubt that understanding the role of determinism and indeterminism in physics will have a profound impact on our view of this question.
On the other hand, Richard Feynman said: “Do not ask yourself… ‘how can it be like that?’ because you will lead yourself down a blind alley from which no one has ever escaped.” I don’t think Feynman was talking about science here: scientists spend much of their time answering how it can be like that. I think Feynman was talking about asking for reasons which somehow satisfy us as humans: answers that will give us short sentences explaining why. There are simple, important questions which might have simple, concise explanations, but finding these explanations seems impossibly difficult. And this is how I find myself coming full circle. Because this point of view, that there are simple questions whose answers cannot be found in a short time but, once found, can be recognized as answering the question, is basically the complexity class NP. Which is computer science. The field, besides physics, which I most deeply admire.
So fate not only made me a physicist, but it also made me a computer scientist.
And the only question remaining is whether it was destiny that I was born at a time when I could participate in the unfolding of the field of quantum computing, which merges physics and computer science like never before.

Schurly You're Joking Dr. Bacon

A new paper, a new paper! If you love the theory of the addition of angular momentum, and don’t we all just love the theory of the addition of angular momentum, then you will really love the new paper we (Isaac Chuang, Aram Harrow, and I) just put on the arXiv. Unfortunately my spell check changed the title to Clebsch-Gordon and I didn’t notice. So I expect a lot of nasty emails complaining about the title. Doh. Well, that’s what the replace button is for, I guess. Here is the paper:
quant-ph/0407082
Efficient Quantum Circuits for Schur and Clebsch-Gordon Transforms
Authors: Dave Bacon, Isaac Chuang, Aram Harrow
Comments: 4 pages, 3 figures

The Schur basis on n d-dimensional quantum systems is a generalization of the total angular momentum basis that is useful for exploiting symmetry under permutations or collective unitary rotations. We present efficient (size poly(n,d,log(1/epsilon)) for accuracy epsilon) quantum circuits for the Schur transform, which is the change of basis between the computational and the Schur bases. These circuits are based on efficient circuits for the Clebsch-Gordon transformation. We also present an efficient circuit for a limited version of the Schur transform in which one needs only to project onto different Schur subspaces. This second circuit is based on a generalization of phase estimation to any nonabelian finite group for which there exists a fast quantum Fourier transform.


When We Live Forever

Things fall apart. Normally we think of our computers as deterministic machines which never make a mistake. But with a very small probability, your hardware will make a mistake (information in a storage device is probably the most likely place where this will occur). The point, however, is that for the tasks we need our computers for, writing an email, ordering a product from amazon.com, etc., the rate of failure does not come into play.
Now suppose that humanity learns to prolong its lifespan to some enormous timescale. Will this change our fundamental concept of what a computer is? When the errors of a computer factor into your life in a real, albeit slow, way, will we think of computers in the same way we do today?
Computers are not invincible. It is not clear to me that their method of achieving fault-tolerance is even the best or most effective method for computation. When we build our computers small, so small that errors become inescapable, will we continue to try to maintain the model of the transistor and the near deterministic, completely controlled system? Or will we take a cue from biology and maybe find that complex, erring systems can be programmed in ways we haven’t thought of yet?

Searching for Bobby Fischer

One of the cool results of computer science which I recently learned is Levin’s universal search. Suppose you have a function f from a set X to a set Y. Many problems in computer science ask you to invert this function, i.e. given a y in Y, find an x such that f(x)=y (there may be multiple such x’s, but let’s not worry too much about that; let’s just say you want to find some x such that f(x)=y). Further, let’s assume that we have an efficient method for evaluating f(x). Thus, we may efficiently test whether a possible solution x satisfies the desired f(x)=y. OK, so that’s the problem. How do you solve it?
Well, here is what Levin proposed. Consider running all possible programs which could solve this problem. We do this in serial, but with the fraction of time spent running each program p proportional to 2^(-l(p)), where l(p) is the length of the program p. So we will be running shorter programs more often than we are running longer programs. There is some program q which is the fastest program for solving our problem; it runs in time T(q). Eventually our procedure will execute this program and get the correct result. How long does this take? Since q receives only a 2^(-l(q)) fraction of the total running time, this time is at worst 2^(l(q)) (T(q)+Tv), where Tv is the time to verify that the answer is correct (not included in T(q)). Thus, up to the factor 2^(l(q)), the procedure is the fastest possible. Now notice what we have done: we have produced an algorithm for this inversion problem which is optimal up to this huge multiplicative constant. If we were to use big-O notation, this constant would be hidden and we would say that asymptotically we have the most efficient algorithm!
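For the curious, here is a toy Python sketch of the dovetailing schedule. It is only an illustration: the “programs” are two hand-written generators with made-up lengths l(p), whereas a real Levin search would enumerate every program of some universal machine, but the time-sharing idea is the same.

# Toy sketch of Levin's universal search schedule. Illustrative only: the
# "programs" are two hand-written generators with made-up lengths l(p); a real
# Levin search dovetails over all programs of a universal machine.

def levin_search(programs, f, y, max_phase=30):
    # programs: list of (length_in_bits, generator_factory) pairs, where each
    # generator yields candidate x's, one per step.  In phase t a program of
    # length l gets 2**(t - l) steps, so overall the fraction of time given to
    # a program is proportional to 2**(-l).
    gens = [(l, factory()) for l, factory in programs]
    for t in range(max_phase):
        for l, gen in gens:
            if t < l:
                continue
            for _ in range(2 ** (t - l)):
                try:
                    x = next(gen)
                except StopIteration:
                    break
                if f(x) == y:            # the verification step (time Tv)
                    return x
    return None

# Example: invert f(x) = x*x, i.e. find an integer square root of y.
f = lambda x: x * x
y = 279841

def brute():                 # a short, dumb "program": count upward from 0
    x = 0
    while True:
        yield x
        x += 1

def bisect():                # a longer, clever "program": binary search
    lo, hi = 0, y
    while lo <= hi:
        mid = (lo + hi) // 2
        yield mid
        if mid * mid < y:
            lo = mid + 1
        else:
            hi = mid - 1

print(levin_search([(3, brute), (8, bisect)], f, y))   # prints 529

Whichever of the candidate programs turns out to be fastest, the schedule slows it down by at most roughly the factor 2^(l(q)) from the argument above.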
Needless to say, the multiplicative constant DOES matter and in most cases is ridiculously large. But the story doesn’t end here. Recently Marcus Hutter has shown how to make this multiplicative constant 5 (Hutter’s work also applies to a broader class of problems than the one described above). What’s the catch this time? The catch this time is that there is an additive constant which is huge!