An interesting question at the talk Sunday was: suppose you build a quantum computer, how do you really know that these crazy quantum laws are governing the computer? What I love about this question is that you can turn it around and ask the same question about classical computers! At heart we believe that if we put together a bunch of transistors to form a large computation, each of these components is behaving itself and there is nothing new going on as the computer gets larger and larger. But for quantum computers there is a form of skepticism which says that once your computer is large enough, the description provided by quantum theory will fail in some way (see, for example, Scott Aaronson’s “Are Quantum States Exponentially Long Vectors?”, quant-ph/0507242, for a nice discussion along these lines). But wait, why shouldn’t we ask whether classical evolution continues to hold for a computer as it gets bigger and bigger? Of course, we have never seen such a phenomenon, as far as I know. But if I were really crazy, I would claim that the computer just isn’t big enough yet. And if I were even crazier, I would suggest that once a classical computer gets to a certain size, its method of computation changes drastically and allows for a totally different notion of computational complexity. And if I were flipping mad, I would suggest that we do have an example of such a system, and this system is our brain. Luckily I’m only crazy, not flipping mad, but still this line of reasoning is fun to pursue.
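To give a rough feel for the “exponentially long vectors” in Scott’s title (a little illustrative Python sketch of my own, not anything from his paper): a general n-qubit state has 2^n complex amplitudes, so just writing the state down classically blows up very quickly.

# Illustrative sketch (my own, not from the paper): how much classical memory
# a naive amplitude-vector representation of an n-qubit state would need.
# A general n-qubit pure state has 2**n complex amplitudes.

BYTES_PER_AMPLITUDE = 16  # one double-precision complex number (2 * 8 bytes)

def naive_state_memory_bytes(n_qubits: int) -> int:
    """Memory to store all 2**n amplitudes of an n-qubit state explicitly."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (10, 30, 50, 300):
    gib = naive_state_memory_bytes(n) / 2**30
    print(f"{n:>3} qubits: {gib:.3e} GiB")

# 10 qubits fit in kilobytes, 30 qubits need about 16 GiB, ~50 qubits already
# strain the largest supercomputers, and 300 qubits would take more amplitudes
# than there are atoms in the visible universe -- which is the scale at which
# the skeptic's worry starts to bite.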
Well, there are a lot of examples of quantum mechanical systems decohering, so that is pretty easy for me to imagine. I can’t think of an example of a system of classical semiconductors going through some state change that affects the rules they operate on. I mean, other than melting them.
Do you have another example in mind besides the brain? Am I even thinking about this the right way?
Well, certainly when we build very very small transistors they will fail because of their coupling to their environment. So I would say that there is an analog of decoherence: it’s just classical noise, but right now our computers aren’t small enough for this to be a major problem. If I were really really crazy, I would say that this is the reason we haven’t seen the effect yet: we haven’t built noisy enough computers. But I’m only one crazy, not two crazies.
You’re thinking about this the right way! I don’t have another example off the top of my head.
I’ve always thought that what was meant by a “fundamental theory” was that it could always be used as a blueprint or pattern to systematically scale up any device governed by the theory. If we know how to build a 3-(qu)bit device with a theory that is fundamental (rather than approximate), we know how to build an n-(qu)bit device. When someone says that, say, nonlinear equations govern a system, they don’t mean that we know how, for a given number of bits, to expand a device constructed according to the nonlinear theory to that number plus one.
The assumption that the brain is a classical computer is what philosophers have been calling the “neuron doctrine”.
But doesn’t this assume that the brain itself is a classical computer?
Suresh has spoken the forbidden words. Forbidden I say! Oh, wait, Roger Penrose is free to post anything he wants in my comment section 😉
Scott Aaronson has a fun rant on this subject in quant-ph/0303041 where he muses about the consequences of general relativity for computational complexity (along with more quantum-type consequences arising from holography and the Planck scale). Scott’s talk on this is quite entertaining, although when I heard it, Seth Lloyd was in the audience and he publicly and strongly encouraged Scott not to pursue this line of thinking until after he was tenured. 🙂