Robert Laughlin, Nobel Prize winner for the theory behind the fractional quantum Hall effect, has a new book out, “A Different Universe.” The book is interesting, but it also has its problems. As you might guess from previous posts I’ve made, Professor Laughlin has an amazing view of quantum computers:
There is a great deal of interest lately in the quantum computer, a fundamentally new kind of computational hardware that would exploit the entanglement of the quantum wave function to perform calculations presently impossible with conventional computers. The most important of these is the generation of enormous prime numbers and the quick factorization of other enormous numbers. The impossibility of factoring a number that is the product of two large primes in reasonable time with conventional computers is the basis of modern cryptography. However, quantum computation has a terrible Achilles heel that becomes clear when one confronts the problem of reading out the answer: the effects that distinguish quantum computers from conventional ones also cause quantum indeterminism. Quantum-mechanical wave functions do indeed evolve deterministically, but the process of turning them into signals people can read generates errors. Computers that make mistakes are not very useful, so the design issue in quantum computation that counts is overcoming mistakes by measurement. A textbook method for doing this is to place a million copies of the same experiment in a small box and measure something they do collectively: generating oscillating magnetic fields, for example, as occurs in a quantum computer built with electron spins. The damage inflicted by the measurement process then affects only a few copies, leaving the rest intact. This trick is so powerful that variations of it enable you to read out the entire wave function of any quantum computer, at least in principle. However, a logical implication is that you have created not a fabulous new kind of digital computer but a conventional analogue computer, a type of machine we do not use in the modern era because it is so easily disrupted by noise. Thus the frenzy over quantum computing misses the key point that the physical basis of computational reliability is emergent Newtonianness.
One can imagine doing a computation without exploiting these principles, just as one can imagine proving by brute force that broken symmetry occurs, but a much more likely outcome is that eliminating computational mistakes will prove to be fundamentally impossible because its physical basis is absent. The view that this problem is trivial is a fantasy spun out of reductionist beliefs. Naturally, I hope I am wrong, and I wish those who invest in quantum computing the best of luck. I also ask that anyone anxious to invest in a bridge in lower Manhattan contact me right away, for there will be discounts for a limited time only.
Wow. Can I really read this and not respond? I just can’t resist. And especially I just can’t resist snarking. I should apologize for what I’m about to write, but my feeble mind just can’t take it. I just can’t take it anymore! So here are my suggestions for Laureate Laughlin:
1. Please read J. von Neumann’s “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components” (1956). The basis of computer reliability has absolutely nothing to do with “Newtonianness”. The basis of conventional computer reliability has to do with redundancy, and more physically with the thermodynamics of many condensed matter systems.
2. After you’ve mastered the most fundamental ideas of fault tolerance, it might be useful to understand the ideas behind error correction. Please read C. Shannon’s “A Mathematical Theory of Communication” (1948). Sure, we are going backwards in time, but I think it’s important for you to realize that redundancy (“place a million copies”) is not the only way to encode information. Indeed, this fact will become very important as we move on to step 3.
3. Now you’re ready for the big stuff. I know you know quantum theory like the back of your hand, so this step will be easier for you than for many others. Please read John Preskill’s “Fault-tolerant Quantum Computation.” See how the previous two ideas, when slightly modified for the quantum world, lead to a theory of fault-tolerant quantum computers. Isn’t that amazing? I consider it to be one of the most important results in physics in the last thirty years, but you’re older and wiser, so you may feel free to downgrade it. But please don’t desecrate what you haven’t had the time to understand. Quantum error correcting degrees of freedom are most definitely not the simplistic collective degrees of freedom you insinuate (“oscillating magnetic fields”). The idea is more subtle, and thus, I believe, more beautiful. Beauty is in the eye of the beholder, but you must surely admit that your solution to the fractional quantum Hall effect is beautiful only once you have the background to understand the theory; so too, quantum fault-tolerance is beautiful, but only once you’ve sat down and understood the theory.
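To make point 1 concrete, here is a minimal Python sketch of von Neumann’s idea: run many copies of an unreliable gate in parallel and take a majority vote. (The `noisy_not` component and its error rate are invented for illustration; von Neumann’s actual construction is more elaborate.)

```python
import random

def noisy_not(value, error_rate):
    # a hypothetical unreliable component: computes NOT, but its
    # output is flipped with probability error_rate
    flip = random.random() < error_rate
    return (1 - value) ^ flip

def majority(bits):
    # the "restoring organ": a majority vote over a bundle of lines
    return int(sum(bits) > len(bits) / 2)

def reliable_not(value, error_rate=0.01, copies=101):
    # run independent unreliable gates and vote; the probability that
    # the vote itself is wrong falls off exponentially in `copies`
    return majority([noisy_not(value, error_rate) for _ in range(copies)])
```

With a 1% gate error rate and 101 copies, a wrong answer requires 51 simultaneous failures, which is astronomically unlikely. Nothing Newtonian about it.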
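Point 2 in one screen of code: the classic Hamming(7,4) code protects four data bits with just three parity checks, rather than wholesale copying, and still corrects any single bit flip. A sketch of the standard textbook construction in Python:

```python
def encode(d):
    # d = [d1, d2, d3, d4]; parity bits sit at codeword positions 1, 2, 4
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

def decode(c):
    # each syndrome bit checks the positions whose 1-based index has
    # the corresponding binary digit set; the syndrome spells out the
    # location of a single flipped bit (0 means no error)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    err = s1 + 2 * s2 + 4 * s3
    c = list(c)
    if err:
        c[err - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # recover the data bits
```

Naive triplication would need 12 bits to protect 4; Hamming does it with 7. That is exactly Shannon’s point: clever encoding beats brute repetition.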
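And the flavor of point 3, in miniature: a pure-Python toy simulation of the three-qubit bit-flip code, the simplest quantum error correcting code (this is a classical state-vector sketch, not Preskill’s full fault-tolerant machinery). The subtlety Laughlin misses is that the syndrome, the parities Z0Z1 and Z1Z2, identifies which qubit flipped without measuring, and therefore without damaging, the encoded amplitudes alpha and beta:

```python
def encoded_state(alpha, beta):
    # logical alpha|0_L> + beta|1_L>, with |0_L> = |000>, |1_L> = |111>;
    # the state is a length-8 amplitude vector, qubit k is bit k of the index
    amp = [0j] * 8
    amp[0b000] = alpha
    amp[0b111] = beta
    return amp

def apply_x(amp, qubit):
    # a bit-flip (Pauli X) error on one qubit permutes basis amplitudes
    out = [0j] * 8
    for i, a in enumerate(amp):
        out[i ^ (1 << qubit)] = a
    return out

def syndrome(amp):
    # parities of qubit pairs (0,1) and (1,2); after a single X error,
    # every basis state with nonzero amplitude agrees on both parities,
    # so reading them out reveals the error location but nothing about
    # alpha and beta
    for i, a in enumerate(amp):
        if abs(a) > 1e-12:
            bits = [(i >> k) & 1 for k in range(3)]
            return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(amp):
    # map the syndrome to the flipped qubit and undo the error
    culprit = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(amp)]
    return amp if culprit is None else apply_x(amp, culprit)
```

Note the contrast with the quoted “million copies”: there are only three qubits here, no copies of the unknown state (cloning it is impossible anyway), and the measurement is engineered to commute with the encoded information.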
Oh, and by the way, the generation of large prime numbers is easy, not hard, for conventional computers. (But you did better than the string theorist Michio Kaku, whom I once saw on T.V. claim that quantum computers were amazing because they could efficiently multiply large numbers.)
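In case anyone doubts that last point: random sampling plus the standard Miller–Rabin probabilistic primality test finds cryptographic-size primes in a fraction of a second on any laptop. A rough Python sketch:

```python
import random

def is_probable_prime(n, rounds=40):
    # Miller-Rabin: composites fail a random witness with prob. >= 3/4,
    # so 40 rounds make a false positive fantastically unlikely
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:     # write n - 1 = d * 2**r with d odd
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def random_prime(bits):
    # sample odd candidates with the top bit set until one passes
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n
```

What conventional computers cannot do efficiently, as far as anyone knows, is the reverse: factoring the product of two such primes. That, not prime generation, is where Shor’s algorithm comes in.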