Robert Laughlin, Nobel prize winner for the theory behind the fractional quantum Hall effect, has a new book out, “A Different Universe.” The book is interesting, but it also has its problems. As you might guess from previous posts I’ve made, Professor Laughlin has an amazing view of quantum computers:
There is a great deal of interest lately in the quantum computer, a fundamentally new kind of computational hardware that would exploit the entanglement of the quantum wave function to perform calculations presently impossible with conventional computers. The most important of these is the generation of enormous prime numbers and the quick factorization of other enormous numbers. The impossibility of factoring a number that is the product of two large primes in reasonable time with conventional computers is the basis of modern cryptography. However, quantum computation has a terrible Achilles heel that becomes clear when one confronts the problem of reading out the answer: the effects that distinguish quantum computers from conventional ones also cause quantum indeterminism. Quantum-mechanical wave functions do indeed evolve deterministically, but the process of turning them into signals people can read generates errors. Computers that make mistakes are not very useful, so the design issue in quantum computation that counts is overcoming mistakes by measurement. A textbook method for doing this is to place a million copies of the same experiment in a small box and measure something they do collectively, generating oscillating magnetic fields, for example, as occurs in a quantum computer built with electron spins. The damage inflicted by the measurement process then affects only a few copies, leaving the rest intact. This trick is so powerful that variations of it enable you to read out the entire wave function of any quantum computer, at least in principle. However, a logical implication is that you have created not a fabulous new kind of digital computer but a conventional analogue computer, a type of machine we do not use in the modern era because it is so easily disrupted by noise. Thus the frenzy over quantum computing misses the key point that the physical basis of computational reliability is emergent Newtonianness. One can imagine doing a computation without exploiting these principles, just as one can imagine proving by brute force that broken symmetry occurs, but a much more likely outcome is that eliminating computational mistakes will prove to be fundamentally impossible because its physical basis is absent. The view that this problem is trivial is a fantasy spun out of reductionist beliefs. Naturally, I hope I am wrong, and I wish those who invest in quantum computing the best of luck. I also ask that anyone anxious to invest in a bridge in lower Manhattan contact me right away, for there will be discounts for a limited time only.
Wow. Can I really read this and not respond? I just can’t resist, and especially I can’t resist snarking. I should apologize for what I’m about to write, but my feeble mind just can’t take it anymore! So here are my suggestions for Laureate Laughlin:
1. Please read J. von Neumann’s “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components” (1956). The basis of computer reliability has absolutely nothing to do with “Newtonianness.” The basis of conventional computer reliability has to do with redundancy, and more physically with the thermodynamics of many condensed matter systems. (See the first sketch after this list.)
2. After you’ve mastered the most fundamental ideas of fault tolerance, it might be useful to understand the ideas behind error correction. Please read C. Shannon’s “A Mathematical Theory of Communication” (1948). Sure, we are going backwards in time, but I think it’s important for you to realize that redundancy (“place a million copies”) is not the only way to encode information (see the second sketch after this list). Indeed, this fact will become very important as we move on to step 3.
3. Now you’re ready for the big stuff. I know you know quantum theory like the back of your hand, so this step will be easier for you than for many others. Please read John Preskill’s “Fault-tolerant Quantum Computation.” See how the previous two ideas, when slightly modified for the quantum world, lead to a theory of fault-tolerant quantum computers. Isn’t that amazing? I consider it to be one of the most important results in physics in the last thirty years, but you’re older and wiser, so you may feel free to downgrade it. But please don’t desecrate what you haven’t had the time to understand. Quantum error-correcting degrees of freedom are most definitely not the simplistic collective degrees of freedom you insinuate (“oscillating magnetic fields”); the idea is more subtle, and thus, I believe, more beautiful (the third sketch after this list shows the simplest example). While beauty is in the eye of the beholder, you must surely admit that your solution to the fractional quantum Hall effect is only beautiful when you have the background to understand the theory; so too, quantum fault tolerance is beautiful, but only once you’ve sat down and understood the theory.
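First sketch: since von Neumann’s construction is so simple, here it is as a toy in Python. The error rate p = 0.01 and the trial count are made up for illustration, and a real fault-tolerant construction also protects the voting gates themselves; this is just the core idea.

```python
import random

def noisy_copy(bit, p=0.01):
    """Return the bit, flipped with probability p (an unreliable component)."""
    return bit ^ (random.random() < p)

def majority_vote(bit, copies=3, p=0.01):
    """von Neumann-style redundancy: run `copies` unreliable copies
    of the computation and take a majority vote over the outputs."""
    votes = sum(noisy_copy(bit, p) for _ in range(copies))
    return int(votes > copies // 2)

# A single copy fails 1% of the time; three copies with a majority vote
# fail only when at least two flip, roughly 3*p^2 = 0.03% of the time.
trials = 100_000
errors = sum(majority_vote(1, copies=3) != 1 for _ in range(trials))
print(f"logical error rate: {errors / trials:.5f}")
```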
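Second sketch, making Shannon’s point that brute repetition is not the only (or best) way to encode: the standard textbook Hamming(7,4) code protects four data bits against any single bit flip using only three check bits, where plain triplication would need eight extra bits. The matrices below are the usual systematic ones.

```python
import numpy as np

# Hamming(7,4): generator and parity-check matrices over GF(2),
# in systematic form G = [I | A], H = [A^T | I].
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])

def encode(data4):
    """Encode four data bits into a seven-bit codeword."""
    return (np.array(data4) @ G) % 2

def correct(word7):
    """Compute the syndrome and flip the single bit it points to."""
    syndrome = (H @ word7) % 2
    if syndrome.any():
        # The syndrome matches exactly one column of H; that column
        # is the location of the flipped bit.
        err = next(i for i in range(7) if (H[:, i] == syndrome).all())
        word7 = word7.copy()
        word7[err] ^= 1
    return word7

codeword = encode([1, 0, 1, 1])
corrupted = codeword.copy()
corrupted[2] ^= 1                                # flip one bit
print((correct(corrupted) == codeword).all())    # True
```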
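Third sketch: a bare-bones state-vector simulation (my own toy illustration, not anything from a real device) of the three-qubit bit-flip code, the simplest quantum error-correcting code. The punchline is that the syndrome operators Z1Z2 and Z2Z3 locate the error while revealing nothing about the encoded amplitudes a and b, which is exactly what the “measure a million copies” picture misses.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

def op(single, pos, n=3):
    """Embed a single-qubit operator at position `pos` of an n-qubit register."""
    mats = [single if i == pos else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Encode a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8
logical = np.zeros(8); logical[0b000] = a; logical[0b111] = b

# A bit flip on qubit 1: an "analog" disturbance, fully digitized below.
state = op(X, 1) @ logical

# Measure the check operators Z1Z2 and Z2Z3. The state is an eigenvector
# of both, so the expectation values are the (discrete) syndrome bits;
# they locate the flip without revealing anything about a and b.
s1 = state @ (op(Z, 0) @ op(Z, 1)) @ state
s2 = state @ (op(Z, 1) @ op(Z, 2)) @ state
flipped = {(+1, +1): None, (-1, +1): 0, (-1, -1): 1, (+1, -1): 2}[(round(s1), round(s2))]

if flipped is not None:
    state = op(X, flipped) @ state    # apply the recovery operation
print(np.allclose(state, logical))   # True: the superposition survives intact
```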
Oh, and by the way, the generation of large prime numbers is easy, not hard, for conventional computers (but you did better than the string theorist Michio Kaku, whom I once saw on TV claim that quantum computers were amazing because they could efficiently multiply large numbers).
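In case anyone doubts that prime generation is easy, here is a rough sketch using the standard Miller-Rabin test. The parameter choices (40 rounds, 512 bits) are arbitrary illustrations; any introductory number theory text has the details.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin primality test; error probability below 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False      # a is a witness that n is composite
    return True

def random_prime(bits=512):
    """Sample random odd `bits`-bit numbers until one passes the test.
    By the prime number theorem this takes a few hundred tries at 512 bits."""
    while True:
        n = random.getrandbits(bits) | (1 << bits - 1) | 1  # set high bit, make odd
        if is_probable_prime(n):
            return n

print(random_prime(512))   # finishes in well under a second
```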
I’m glad you posted this on the 31st; I wouldn’t have believed it if you had posted it on the 1st.
Perhaps people who have been recognized as being at the top of their field can’t resist commenting on anything within physics. What amazes me is that even a first-year grad student (in quantum computing) would be able to point out obvious flaws in his reasoning.
Dave, reading that passage made me unexpectedly happy. If even a Nobel physicist can cram that many howlers into a single paragraph when it comes to computation, then I have my work cut out for me for the rest of my life.
I once had an opportunity to talk to Bob about his objections to quantum computing. I had given a colloquium at Stanford on fault-tolerant quantum computation, and he made a very public and highly skeptical comment during the question period. Then afterward we went to dinner with Ike Chuang and a few others to continue with a friendly discussion (this was when Ike was still at Stanford). Bob’s views reminded me a lot of discussions I had earlier with Rolf Landauer (though Landauer’s skepticism eventually softened). For both of them, the chief concern was not decoherence, but rather the deeply held belief that a quantum computer could not be fundamentally different from a classical analog computer, and must suffer from similar instabilities. The key distinction between the two is the existence of a tensor product decomposition of the Hilbert space in the quantum case: this is heavily exploited in fault-tolerant quantum protocols, and we can’t use the same tricks to stabilize analog classical computation. It’s actually a rather subtle point. Perhaps it would be worthwhile for someone to write a carefully crafted response to Laughlin that definitively clarifies the issue, as surely there are others who are puzzled by the same question.
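To put that distinction in symbols (a rough sketch of the point, and only one way to phrase it): an $n$-qubit register lives in the tensor product space $\mathcal{H} = (\mathbb{C}^2)^{\otimes n}$, so its $2^n$-dimensional state space is built out of $n$ small local factors. Quantum error correction exploits exactly this structure: measuring a few-body check operator such as $Z_1 Z_2$ yields a discrete syndrome that locates an error without disturbing the encoded amplitudes. A classical analog machine encodes its state in a single continuous variable $x \in \mathbb{R}$; there are no local tensor factors to check, so a small drift in $x$ is indistinguishable from a small change in the data.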
Where did Laughlin suggest that the generation of large prime numbers was hard? I don’t see that anywhere.
Aargh, oops, sorry, it is right there! I somehow thought you were referring to his statement about factoring. Aargh, that’s bad (I mean this second “Aargh” for the error on his part).
It is interesting that a little further on in the book (p. 68, three pages after your quote), he says he is “very good at coding” (i.e., computer programming). I guess this doesn’t extend to a knowledge of some basic numerical algorithms…
I’m coming to this a year after the original post through a serendipitous Google search. Also, I’m not a quantum computation person. Anyway, I think there is a point in this quote from Laughlin. Computation based on Hilbert-space evolution in a qubit basis is of course possible; we can even do it experimentally with single-digit numbers of qubits, in a controlled enough environment. Nonetheless, the world is not just the qubits. Grant me for a moment that what happens in a quantum computer *can* be regarded as a very effectively engineered and mathematically described analogue computer. This view doesn’t get in the way of quantum computation per se, the use of qubits, but it does affect the practical implementation of qubits. I don’t yet see a really good analysis showing that scaling to a million qubits, say, is possible. Error correction, in particular, needs more qubits, right, and that means more control of decoherence (that’s analogue errors to me), right? My worry is that something like exponentially more control of decoherence is needed (measured in dollars per qubit).
Enjoy!
Error correction, in particular, needs more qubits, right, and that means more control of decoherence (that’s analogue errors to me), right?
Well, first of all, decoherence is analog errors in the same way that stochastic errors in classical information science are analog. Which is to say: they aren’t. That’s really one of the main points of the theory of quantum error correction; I suggest you look at my post here on this subject. Second of all, the control requirements do NOT go up. That is the point of the threshold theorem. For a fixed amount of control, you may require more qubits, but you will not require more control.
My worry is that something like exponentially more control of decoherence is needed (measured in dollars per qubit).
Well, not according to current ideas, and especially not according to the threshold theorem. See John Preskill’s “Fault-tolerant Quantum Computation” for a good discussion of the threshold theorem.
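To be concrete about why the control requirements stay fixed, the standard back-of-the-envelope form of the threshold theorem for concatenated codes (constants suppressed) says that if each physical gate fails with probability $p < p_{\rm th}$, then after $k$ levels of concatenation the logical error rate is

$$p_k \le p_{\rm th} \left( \frac{p}{p_{\rm th}} \right)^{2^k},$$

which falls doubly exponentially in $k$, while the qubit and gate overhead grows only like $c^k$ for some constant $c$, i.e., polylogarithmically in the target accuracy. The physical error rate $p$, which is what a fixed amount of control buys you, never has to improve; only the number of qubits grows.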
Hey Dave,
Maybe you should post your review on Amazon.com with the other customer reviews. (There are quite a number of favorable ones there, unfortunately.)