Often, when we encounter the threshold theorem for quantum computation, it is pointed out that two assumptions are necessary for fault-tolerant quantum computation. These two assumptions are:
1. A supply of refreshable pure or nearly pure ancillas into which the entropy generated by quantum errors can be dumped throughout the error correction procedure.
2. Parallel operations.
For point 1, an important reference is Aharonov, Ben-Or, Impagliazzo and Nisan, quant-ph/9611028, and for point 2, an important reference is Aharonov and Ben-Or, quant-ph/9611029. These two assumptions are provably necessary. Usually when I see these two points presented, they are presented as if they are something unique to quantum computation. But is this really true? Well, certainly not for the first case. In fact, quant-ph/9611028 is first about computation in a noisy model of classical reversible computation and only second about a noisy model of quantum computation! What about point 2? Well, I can't find a reference showing this, but it seems that the arguments presented in quant-ph/9611029 can be extended to noisy reversible classical computation.
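Here, as a purely illustrative toy (my own sketch, not anything from either paper), is why point 1 is already a classical statement. Take a classical 3-bit repetition code corrected reversibly: the two parity checks get XORed into a pair of ancilla bits, and if those ancillas are not refreshed to zero between rounds, the entropy they have soaked up corrupts later syndromes and the correction falls apart.

```python
import random

def repetition_code(rounds, p, refresh_ancillas, trials=5000):
    """Toy classical 3-bit repetition code with reversible syndrome extraction.
    Each round every data bit flips with probability p, the two parity checks
    are XORed into ancilla bits, and the indicated bit is corrected."""
    failures = 0
    for _ in range(trials):
        data = [0, 0, 0]   # logical 0 encoded as 000
        anc = [0, 0]       # ancilla bits that will hold the syndrome
        for _ in range(rounds):
            data = [d ^ (random.random() < p) for d in data]   # bit-flip noise
            anc[0] ^= data[0] ^ data[1]    # reversible parity check on bits 0,1
            anc[1] ^= data[1] ^ data[2]    # reversible parity check on bits 1,2
            if anc == [1, 0]:
                data[0] ^= 1
            elif anc == [1, 1]:
                data[1] ^= 1
            elif anc == [0, 1]:
                data[2] ^= 1
            if refresh_ancillas:
                anc = [0, 0]   # dump the entropy by swapping in fresh zeroed ancillas
        failures += sum(data) >= 2         # majority has flipped: logical error
    return failures / trials

print("fresh ancillas   :", repetition_code(rounds=30, p=0.03, refresh_ancillas=True))
print("recycled ancillas:", repetition_code(rounds=30, p=0.03, refresh_ancillas=False))
```

And a similarly crude sketch for point 2 (again entirely my own toy, not the actual argument of quant-ph/9611029): if noise strikes every bit of a memory at a constant rate but you can only repair one bit per time step, the effective error rate grows with the size of the memory, whereas parallel correction keeps it pinned near the physical rate.

```python
import random

def avg_error_rate(n_bits, p, steps, parallel):
    """Average per-bit error rate of a noisy n-bit memory where each bit flips
    with probability p per time step, comparing correction of every bit each
    step (parallel) against correction of a single bit per step (serial)."""
    errs = [0] * n_bits
    total = 0.0
    for t in range(steps):
        errs = [e ^ (random.random() < p) for e in errs]   # noise hits every bit
        total += sum(errs) / n_bits                        # error seen this step
        if parallel:
            errs = [0] * n_bits      # repair all bits at once
        else:
            errs[t % n_bits] = 0     # repair only one bit this step
    return total / steps

print("parallel correction:", avg_error_rate(n_bits=100, p=0.01, steps=2000, parallel=True))
print("serial correction  :", avg_error_rate(n_bits=100, p=0.01, steps=2000, parallel=False))
```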
Of course many of you will, by now, be grumbling that I haven't included the other assumptions usually stated for the threshold theorem for fault-tolerant quantum computation. Okay, so I'll put one in:
3. A noise model, for both your system and your control of it, which is not too ridiculous.
Well, okay, usually it is not stated this way, but that's because I don't have an easy way to encapsulate the noise models which provably rule out fault-tolerant computation. But certainly there are similar noise constraints for classical computation which are provably necessary for fault-tolerant classical computation.
So what are the differences between the assumptions for fault-tolerant quantum and fault-tolerant classical computation? I strongly believe that the only differences here are those arising simply from going from a theory of probabilities in classical computation to a theory of amplitudes in quantum computation. Of course this shortchanges the sweat and tears necessary to build the theory of fault-tolerant quantum computation, and I don't want to disparage this in any way. I just want to suggest that the more we learn about the theory of fault-tolerant quantum computation, the more we recognize its similarity to probabilistic classical computation. I call this general view of the world the “principle of the ubiquitous factor of two in quantum computing.” The idea being, mostly, that quantum theory differs from probabilistic theory only in the necessity to deal with phase as well as amplitude errors, or more generally, to deal with going from probabilities to amplitudes. This point of view is certainly not even a precise heuristic, but it does seem to be the direction we are heading towards.
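To make that factor of two a little more concrete: any single-qubit error operator expands in the basis {I, X, Z, XZ}, that is, identity, bit flip, phase flip, and both together, so a code that handles X-type and Z-type errors handles everything. A quick numerical check of this (just a NumPy sketch, nothing deep):

```python
import numpy as np

# Pauli-type basis: classically you only need bit flips (X); quantum mechanically
# you also need phase flips (Z) and their product XZ.  Any 2x2 error operator E
# is a linear combination of these four matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
basis = {"I": I, "X": X, "Z": Z, "XZ": X @ Z}

rng = np.random.default_rng(0)
E = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # arbitrary error operator

# Hilbert-Schmidt coefficients: c_P = Tr(P^dagger E) / 2
coeffs = {name: np.trace(P.conj().T @ E) / 2 for name, P in basis.items()}
reconstruction = sum(c * basis[name] for name, c in coeffs.items())
print(np.allclose(reconstruction, E))   # True: {I, X, Z, XZ} spans all single-qubit errors
```

Classically you would only ever need the I and X pieces; the Z and XZ pieces are the entire quantum surcharge, hence the factor of two.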
Of course the above view is speculative, not rigorous, and open to grand debate over beers or within the battleground of an academic seminar. But it certainly is one of the reasons why I'm optimistic about quantum computing, and why, when I talk to a researcher in higher dimensional black holes (for example) and they express pessimism, I find it hard to put myself in their shoes. To summarize that last sentence, I'm betting we have quantum computers before higher dimensional black holes are discovered 🙂