Oftentimes when we encounter the threshold theorem for quantum computation, it is pointed out that there are two assumptions which are necessary for fault-tolerant quantum computation. These two assumptions are
1. A supply of refreshable pure or nearly pure ancillas into which the entropy generated by quantum errors can be dumped throughout the error correction procedure.
2. Parallel operations.
For point 1, an important reference is Aharonov, Ben-Or, Impagliazzo and Nisan, quant-ph/9611028, and for point 2, an important reference is Aharonov and Ben-Or, quant-ph/9611029. These two assumptions are provably necessary. Usually when I see these two points presented, they are presented as if they are something unique to quantum computation. But is this really true? Well certainly not for the first case. In fact quant-ph/9611028 is first about computation in a noisy model of classical reversible computation and second about a noisy model of quantum computation! What about point 2? Well I can’t find a reference showing this, but it seems that the arguments presented in quant-ph/9611029 can be extended to reversible noisy classical computation.
Of course many of you will, by now, be grumbling that I haven’t included the other assumptions usually stated for the threshold theorem for fault-tolerant quantum computation. Okay, so I’ll put it in:
3. A noise model, in relation to the control of your system, which is not too ridiculous.
Well, okay, usually it is not stated this way, but that’s because I don’t have an easy way to encapsulate the noise models which provably preclude fault-tolerant computation. But certainly there are similar noise constraints which are provably necessary for fault-tolerant classical computation.
So what are the differences between the assumptions for fault-tolerant quantum and fault-tolerant classical computation? I strongly believe that the only differences here are differences arising simply from going from a theory of probabilities in classical computation to a theory of amplitudes in quantum computation. Of course this shortchanges the sweat and tears necessary to build the theory of fault-tolerant quantum computation, and I don’t want to disparage this in any way. I just want to suggest that the more we learn about the theory of fault-tolerant quantum computation, the more we recognize its similarity to probabilistic classical computation. I call this general view of the world the “principle of the ubiquitous factor of two in quantum computing.” The idea being mostly that quantum theory differs from probabilistic theory only in the necessity to deal with phase as well as amplitude errors, or more generally, to deal with going from probabilities to amplitudes. This point of view is not rigorous, and barely even heuristic, but it seems at least that this is the direction we are heading.
Of course the above view is speculative, not rigorous, and open to grand debate over beers or within the battleground of an academic seminar. But it certainly is one of the reasons why I’m optimistic about quantum computing, and why, when I talk to a researcher in higher dimensional black holes (for example) and they express pessimism, I find it hard to put myself in their shoes. To summarize that last sentence: I’m betting we have quantum computers before higher dimensional black holes are discovered 🙂
I think there is a long tradition in the cellular automaton community of noting the need for parallelism to achieve fault tolerance. It is certainly emphasized in the papers by Peter Gacs, which is why we referred to Gacs in noting that these requirements apply to classical as well as quantum fault tolerance in Sec. II of http://arxiv.org/abs/quant-ph/0110143
I meant to add that the reason we should emphasize these conditions for fault tolerance when giving talks about quantum fault tolerance (even though the same conditions must be satisfied classically) is that it is important, when considering the promise of any proposal for physically realizing quantum gates, to assess how well that realization will meet these conditions. That parallel operation and a place to dump entropy are both necessary is sometimes overlooked.
Either you’re secretly hosting a mirror of arxiv.org, or your URLs need fixing.
Oops. Doh. Fixed.
“I strongly believe that the only differences here are differences arising simply from going from a theory of probabilities in classical computation to a theory of amplitudes in quantum computation.”
If run-of-the-mill classical computers were reversible, then I might agree.
What’s the largest scale classical reversible computer ever built?
Ah but going to irreversible classical is like going to quantum theory with measurements, so I will restate:
“I strongly believe that the only differences here are differences arising simply from going from a theory of probabilities in classical computation to a theory of amplitudes and probabilities in quantum computation.”
😉
It appears to be correct that the requirements for fault tolerance in the quantum case are very similar (if not identical) to those in the classical case. What is unclear is the correct interpretation of this similarity. Does this mean that since classical computation exists, quantum computers (and the “new phase of matter” they seem to represent, according to Dave) must be feasible as well? Or is it that we should have a second look at the “ridiculous” noise models of item 3? (I’d be happy to see many and detailed such examples even if not encapsulated.)
A technical point: as far as I know, the (indeed, important) result about noisy reversible computation still leaves open the possibility that a noiseless classical computer plus a noisy reversible quantum computer can factor in polynomial time. Finding any model of low-rate noise (as ridiculous as one wishes) which (provably) does not enable even log-depth computing seems difficult.
Gil, I thought that the modular exponentiation part of Shor’s algorithm wasn’t log-depth. Am I misinterpreting what you are saying?
Also, of course, I’m a grumpy old man, so I will always maintain that the model of noiseless classical computers does not exist (nor does the model of noiseless quantum computers exist.) But that’s just what I get for hanging out with too many experimental physicists.
I meant that noiseless classical computers + log-depth (noiseless) quantum computers are strong enough to enable factoring in polynomial time (as far as I know, by a result of R. Cleve and J. Watrous, quant-ph/0006004).
Ah right, thanks! Forgot about that. The idea is you compute the bitwise exponentiated terms classically in poly time and then do an iterated multiplication a la Beame, Cook, and Hoover for the quantum part of the circuit.
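To make that split concrete, here is a minimal Python sketch of the classical half only (my own illustration with made-up example numbers, not code from Cleve-Watrous or Beame-Cook-Hoover): the powers a^(2^j) mod N are precomputed classically by repeated squaring, so that the quantum circuit is left with just the iterated multiplication of the factors selected by the exponent qubits.

```python
def classical_power_table(a, N, n_bits):
    """Classically precompute a^(2^j) mod N for j = 0 .. n_bits-1
    by repeated squaring; these are the controlled multipliers that
    the quantum part of the modular exponentiation has to combine."""
    table, x = [], a % N
    for _ in range(n_bits):
        table.append(x)
        x = (x * x) % N  # square mod N, entirely classical
    return table

# Toy example: a = 7, N = 15, a 4-bit exponent register.
print(classical_power_table(7, 15, 4))  # [7, 4, 1, 1]
```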
Under the two assumptions that Dave mentioned, the threshold theorem addresses one important obstruction to fault tolerance:
(1) – The noise rate is large.
This obstruction deals with a single qubit/bit. We need a low noise/signal ratio. Indeed there is essentially no difference between classical and quantum systems, and the feasibility of classical computing indicates that an intolerably high noise rate is not a universal property of noisy computing devices.
However, the following possible principle is also an obstruction:
(2) – The correlation between the noise acting on pairs of correlated elements in a noisy device is high.
This obstruction deals with pairs of qubits. We need to require a low noise/signal ratio also in terms of correlations. Again I see no difference between classical and quantum devices, and am curious whether (2) generally applies to noisy physical devices (classical or quantum) whose behavior is stochastic. Such a principle appears to allow fault-tolerant classical computing, but to be harmful to quantum computing. (The assumptions of the threshold theorems exclude the possibility of (2).)
Of course, one can ask how such a principle can come about. We do not want to seriously consider the fantastic scenarios of terrorist attacks Dave mentioned in his previous post, but rather stochastic models for noise which are based on local operations, namely the noise being created by a sort of stochastic (rather primitive) quantum computer running alongside our physical device. The question is whether such models can create damaging patterns of noise. This looks interesting to me and starting with “ridiculous” models seems rather reasonable. Alternatively (or in parallel), one can try to test (2) empirically.
“I strongly believe that the only differences here are differences arising simply from going from a theory of probabilities in classical computation to a theory of amplitudes and probabilities in quantum computation.”
Dave, I’m surprised you’re discounting the importance of the quantum tensor product structure (especially after you made such nice use of it in your “Operator Quantum Error Correcting Subsystems for Self-Correcting Quantum Memories” paper). What about that old dictum “fight entanglement with entanglement”? It does not seem to apply to classical fault tolerance.
Daniel,
The classical dictum is “fight correlation with correlation.” A classical noise process is simply getting correlated with a classical environment. If you don’t create correlated classical states you won’t be able to protect your classical information (encoding into a classical error correcting code is nothing more than the limit of creating perfect correlation.)
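As a toy illustration of that dictum (a Python sketch of my own, with the five copies and the 10% flip rate chosen just for illustration): a repetition code is nothing but deliberately created correlation, and a majority vote then out-votes the independent flips.

```python
import random

def encode(bit, n=5):
    """Repetition code: create n perfectly correlated copies of the bit."""
    return [bit] * n

def noisy_channel(codeword, p=0.1):
    """Flip each copy independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote: the deliberate correlation beats the independent noise."""
    return int(sum(codeword) > len(codeword) / 2)

word = encode(1)
print(word, "->", decode(noisy_channel(word)))  # almost always recovers 1
```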
Also I don’t think a tensor product structure alone makes the difference between the power of classical and quantum computers. The difference is when you combine complex amplitudes with the tensor product.
Or at least in my myopic view of the world 🙂
Gil,
I don’t understand what you mean that correlated errors seem to allow classical but not quantum computation. Certainly if I come up with ridiculous (to use my favorite word 😉) models of correlated classical errors, they too will destroy classical computation. I’m myopic, so right now the only way I can see engineering worlds with classical but not quantum computation is to engineer a world in which “phase” type decoherence is massive or crazily correlated but “amplitude” type decoherence is not.
Dave,
As a Bell’s theorem entrepreneur I’m sure you’d agree there is a qualitative difference between classical correlation and quantum entanglement. (Well, maybe your work on the communication cost of simulating Bell correlations would actually indicate just a quantitative difference.)
And indeed, I meant to point out that there is more to quantum computing than complex amplitudes: the power comes from combining the latter with the tensor product structure.
Right, so we agree that there is more to the power of quantum computing than complex amplitudes. But I would argue (the original direction of your posting) that there is also more to the difference between quantum and classical fault tolerance than amplitudes versus probabilities. And that more is (I suspect) at least in part the quantum tensor product structure (TPS).^*
I do agree that entanglement is not the single ingredient one must add to amplitudes to understand the quantum/classical difference. This is why I emphasized quantum TPS rather than entanglement. Indeed, if you accept the “generalized entanglement” idea of Barnum et al. (quant-ph/0305023)^** then entanglement exists independently of a TPS. Namely, they have examples of generalized entanglement where no subsystem partition exists (such as a single spin-1 particle). I suspect such entanglement plays no role in the quantum/classical speedup or fault tolerance distinction.
^* And that TPS is not an absolute property of the Hilbert space, but rather a property that is relative to the experimentally accessible set of observables: “Quantum Tensor Product Structures are Observable Induced”.
^** A pure state is generalized unentangled relative to a set of distinguished observables if its reduced state is pure, and generalized entangled otherwise.
Daniel: see we agree!
I agree that there is more to the power of quantum computing than complex amplitudes. But there is more to the power of classical computing, too!
It is an interesting question why quantum computers seem to outperform classical computers. But we can also ask, for example, why classical computers outperform the Clifford group gate set with computational basis preparation and measurement. Clearly Clifford group gates can generate entanglement, are quantum, but are also not universal for classical computation and can be simulated. Why are classical computers more powerful than this type of quantum computer (which is, interestingly, the easiest model to implement fault-tolerantly)?
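The “can be simulated” part is the Gottesman-Knill theorem, and the bookkeeping behind it fits in a few lines. Here is a bare-bones sketch of my own (following the usual stabilizer-tableau idea; measurement is omitted, so this is not a complete simulator): the state is tracked by its n stabilizer generators, each stored as x/z bit vectors plus a sign, and H, S and CNOT just update those bits, which is why the cost is polynomial rather than exponential in n.

```python
class StabilizerState:
    """Track the stabilizer generators of the n-qubit state |0...0>
    under Clifford gates (H, S, CNOT).  No measurement, so this is
    only a sketch of why such circuits are classically easy."""
    def __init__(self, n):
        self.n = n
        # generator i starts out as Z_i: x = 0, z = e_i, sign = +1
        self.x = [[0] * n for _ in range(n)]
        self.z = [[int(i == j) for j in range(n)] for i in range(n)]
        self.sign = [0] * n  # 0 means +1, 1 means -1

    def h(self, q):    # H: X <-> Z, Y -> -Y
        for i in range(self.n):
            self.sign[i] ^= self.x[i][q] & self.z[i][q]
            self.x[i][q], self.z[i][q] = self.z[i][q], self.x[i][q]

    def s(self, q):    # S: X -> Y, Y -> -X, Z -> Z
        for i in range(self.n):
            self.sign[i] ^= self.x[i][q] & self.z[i][q]
            self.z[i][q] ^= self.x[i][q]

    def cnot(self, c, t):  # CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
        for i in range(self.n):
            self.sign[i] ^= (self.x[i][c] & self.z[i][t]
                             & (self.x[i][t] ^ self.z[i][c] ^ 1))
            self.x[i][t] ^= self.x[i][c]
            self.z[i][c] ^= self.z[i][t]

# Preparing a Bell pair touches only O(n) bits per generator:
state = StabilizerState(2)
state.h(0)
state.cnot(0, 1)   # the stabilizers are now XX and ZZ
```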
I personally suspect that the reason we get into thinking about the power of quantum computation is that we are in the very early days of quantum algorithms. Certainly entanglement has something to do with it, in the sense that it is “necessary” (in the Jozsa and Linden sense), but it is not sufficient. So trying to explain the power of quantum computers as coming just from a tensor product with a complex vector space doesn’t seem to me to be the answer.
And this is probably great! Because it means that there is much we need to explore. The answer, I suspect, won’t be a simple one, but more likely it will involve some very sophisticated theory. Can you tell that I’m sitting in a computer science department now?
We agree.
The difference may be that I think calling generalized entanglement “entanglement” is a bit heavy handed. I am very enthusiastic about the idea behind “generalized entanglement,” and especially its usefulness in many settings (as, for example, in condensed matter systems where it leads to a definition of purity which corresponds, at least in some systems, to an order parameter distinguishing different phases of the system), but I personally think that calling it entanglement is very confusing.
For example, consider the example you cite, a spin-1 particle. One takes a three dimensional irrep of su(2) and then looks at observables of the generators. These form a three dimensional sphere and the extremal points on the boundary are the spin-coherent states. Then a state like |m=0> is not on the boundary and is therefore not a “pure” state with respect to these observables. They then proceed to call this state a “generalized entangled” state, since entangled subsystem states are distinguished by being the states that are mixed when tracing over the full pure state. I do agree that this is an interesting (and in many cases useful) definition, but what does this have to do with anything being entangled with anything else? I would instead talk about states which are pure with respect to an observable algebra and states which are mixed with respect to an observable algebra. Then the interesting thing about “generalized entanglement” has nothing to do with entanglement but everything to do with pure states with respect to one algebra which are mixed with respect to a different algebra.
But that’s just my opinion. And I just got my wisdom teeth out, so clearly I’m not thinking clearly.
Dave, “Gil, I don’t understand what you mean that correlated errors seem to allow classical but not quantum computation. Certainly…they too will destroy classical computation”
Actually, there are some forms of highly correlated noise which allow classical computation and harm quantum computation but this was NOT the point I was making above. (Obstruction (2) from my previous comment implies no correlations for errors in the classical case when the bits themselves are not correlated.)
“The only way I can see engineering worlds with classical but not quantum computation…”
Let us engineer a world, then, Dave. The requirement is this (obstruction (2) to quantum fault tolerance from my comment above):
(*) In any noisy system the noise “acting” on pairs of elements which are substantially correlated is also substantially correlated.
(Well, the word “noisy” refers to what happens when we approximate a small subsystem of a large system by neglecting the effect of the “irrelevant” elements (and by having approximate “initial conditions”). We suppose that we can have good approximations (this is what we mean by a small noise rate, and this assumption allows error correction and computation to start with), but (*) is an assumption about the nature of such approximations.)
Note:
1. (*) is just a specification; if we want to send our world plans off for production we need more details on how (*) comes about.
2. Maybe our present world already satisfies (*) BOTH for classical correlations and for quantum correlations. Can you think of a counterexample? We may be able to examine (*) from the behavior of pairs of qubits in small quantum computers.
3. (*) will allow classical (even randomized) computation (because in this case we do not need the bits to be correlated, so (*) does not impose any condition on the noise being correlated). But (*) appears to be damaging for quantum computation.
4. If (**) we have a noisy quantum computer with highly entangled qubits, and if (*) holds, then the nature of low-rate noise will be rather counter-intuitive. There will be a substantial probability of a large proportion of all qubits being hit. (Hmmm, you called this t-error, nice!) I do not know if such a pattern for the noise should be considered ridiculous. But if you do, note that there are two “if”s here, and decide whether you would rather give up (**) or (*).
Returning to the main theme of this post: I agree that the issues of fault tolerance are very similar for the classical and quantum case (in spite of the recent Bacon-Lidar pact), but I do not understand why this should be a source of optimism (or pessimism).
Without taking any position on the relative ease of classical-versus-quantum error correction, the “difficult-to-correct” postulate of Gil Kalai:
… is a very reasonable postulate (to an engineer). For example, suppose two qubits are dipole-coupled with a coupling C~1 (in natural units set by the clock interval of the computation).
To an engineer, though, no qubit coupling C ever takes exactly its design value of unity. Rather, such couplings are known only within error intervals. And furthermore, such couplings are generically subject to broad-spectrum time-dependent noise, originating e.g. within lattice vibrations of the device.
Doesn’t this generically introduce both systematic errors and stochastic errors into practical quantum computations, and aren’t these errors generically of precisely the form that Kalai’s post regards as hard to correct?
The one area where I will venture an opinion is this: stochastic coupling noise is Choi-equivalent to a continuous weak measurement process, wherein the quantity being measured is the localized qubit-dependent energy in the quantum computation.
In other words, for a quantum computer to operate successfully, it must necessarily be a device whose localized internal operations cannot be spied upon, not even by the physical noise mechanisms of the device. And this absolute requirement of “hidden operation” is one feature that makes quantum computation essentially different from classical computation.
This is why we quantum system engineers are increasingly studying cryptography!
Gil, I see, I misread your point (2). You have a noise model there where correlated noise only occurs between correlated (qu)bits. Oops.
In note 3 you say: “(*) will allow classical (even randomized) computation (because in this case we do not need the bits to be correlated, so (*) does not impose any condition on the noise being correlated).” This confuses me. Why don’t we require classical correlated bits for classical computation? When I perform classical error correction (as our hard drives and transistors are doing for us) then aren’t the individual bits correlated? And what about during the classical computation? It seems to me that this will produce correlated bits along the computation.
As to optimism versus pessimism, I am an optimist for many reasons. I am not an optimist when it comes to the philosophical question of creating computers which are perfectly fault-tolerant. But for creating fault-tolerant quantum (and classical) computers which can solve interesting problems which will make our lives better: for this I am an optimist. Further, I am focused on the main problems in the upcoming experiments. And they have nothing to do with crazy models of noise. Which is of course not an argument to say that such noise doesn’t exist: it’s just that right now the discussions of these noise models are so far removed from the real problems of the lab that I don’t see a reason to be a pessimist. Further, perhaps my experience with decoherence-free subspaces also shapes my optimism. Give me correlated noise and I am even happier. And I guess, on a fundamental level, I side with Einstein: the universe may be mysterious, but it is never malevolent 😉
John: the type of errors you are talking about are exactly the type of errors which fault-tolerant quantum computation is designed to deal with. Errors which occur due to imprecision of couplings for the gates you are trying to correct are fine for fault-tolerant quantum computation, as long as they are not strong.
The noise Gil seems to be arguing for (note: my understanding, not Gil’s meaning!) causes correlated errors when the (qu)bits are correlated, not coupled.
Gee, I can’t even figure out how to insert paragraph breaks in these replies, so how can I figure out quantum error correction? Is there an XHTML code that is functionally similar?
On hearing the words “solvable in principle”, a mathematician or physicist tends to focus mainly on the word “solvable”, while an engineer tends to hear “solvable in principle, but all-too-likely to be NP-hard to solve in practice”.
The “Mike and Ike” book provides good examples. In the book, qubit gates are designed to be error-correcting via a beautiful and rigorous formalism. But in deriving the threshold theorem for the overall computation, there seems to be an implicit assumption that whenever the deterministic part of a qubit coupling is turned off, the stochastic part of the coupling is turned off too.
The assumed noise turn-off is achievable in principle, but it is amazingly hard for engineers to achieve in practice. Even designs that physically shuttle qubits away from one another tend to create enduring long-range couplings, e.g., via thermal fluctuations in the conduction bands of the confining walls of a shared ion trap, or via optical fluctuations in a shared Fabry-Perot cavity.
IMHO, mathematicians and physicists are right to ignore these couplings, for the sake of doing beautiful math and physics, and engineers are equally right to fear them, for the sake of building hardware that works.
Note that this “noise turn off” problem is unique to quantum computers, in that classical computers have robust fan-out, such that any desired number of copies of any desired bit can be robustly cloned.
Dave,
1. To the best of my understanding (or just instincts) the issue of correlated bits does not enter classical computation. So my (*) does allow classical computation. (But I will be happy to think more about it off-line.)
(If there was a need to manipulate correlated bits (*) would have been as damaging, but there is no such need from a computational point of view.)
2. Well, I did not really present (unfortunately) a noise model, just an idea or a requirement for a noise model, where the noise (as you correctly put it) causes correlated errors when the qubits are correlated. Another way to look at it (avoiding the optimism/pessimism issue which seems rather irrelevant) is this: the threshold theorem (which at present is the only game in town) has spectacular consequences, but we try to look at the simplest non-trivial consequence. Namely, that we can have two qubits in an entangled state (say, a cat) such that the noise for them is (almost) independent, in spite of the arbitrary noise we allow on gates. (This is an ingredient of fault tolerance which is not similar to classical fault tolerance and it indeed looks a bit counter-intuitive.)
This simple consequence of the threshold theorem seems to capture much of its power. So perhaps this is what we should try to attack (theoretically) or examine (empirically).
Yeah, my line breaks are all fudged up. Need to fix that.
“But in deriving the threshold theorem for the overall computation, there seems to be an implicit assumption that whenever the deterministic part of a qubit coupling is turned off, the stochastic part of the coupling is turned off too.” Hm, I don’t think this is neglected. The stochastic part of the coupling, as you call it, is simply quantum noise on a wire when the gate is turned off.
“The assumed noise turn-off is achievable in principle, but it is amazingly hard for engineers to achieve in practice. Even designs that physically shuttle qubits away from one another tend to create enduring long-range couplings, e.g., via thermal fluctuations in the conduction bands of the confining walls of a shared ion trap, or via optical fluctuations in a shared Fabry-Perot cavity.”
Like I said, the threshold theorem doesn’t depend on this, except in the strength of this problem, but like you said, it can be a major source of error. Certainly, it is a bigger problem in some implementations than others. For example, while the patch potentials believed to cause heating in ion traps are a source of error, they are currently not the source of error which is keeping ion traps above the fault-tolerant threshold. That error comes from the two-qubit (and to some extent one-qubit) gates.
“Note that this ‘noise turn off’ problem is unique to quantum computers, in that classical computers have robust fan-out, such that any desired number of copies of any desired bit can be robustly cloned.”
I don’t believe this. I believe that we think there is no such problem because we are dealing with already physically error corrected systems. Certainly if you try to engineer a SINGLE spin, for example, to be a classical spin as a bit, it will have the same problems. We live in a world in which the ideas of error correction are shielded from our eyes: our hard drives, our transistors, are all fault-tolerant (or at least fault-tolerant with a lifetime long enough for our purposes: that data on your hard drive will eventually die…time to back up!) due to the physics of the device.
That is very nicely put, IMHO! In fact, when we check the literature, our eyes themselves are very strongly error-corrected sensors, within which every retinal molecule is strongly coupled to a thermal reservoir of vibrating water molecules, with the functional consequence being that the signals from a single retinal molecule can be robustly amplified and fanned out to the visual cortex.
From this point of view, we humans evolved at “hot” temperatures precisely so we could reliably observe and interact with a classical world, within which both our ideas and our genes can reproduce (and evolve). And yes, we humans do support embedded biological error correcting codes, which are found at every level of our metabolic function. In fact, from a combined physics/engineering/biology perspective, it is a very striking fact that our own internal error-correcting processes are sufficient to create a classical world.
It is fun to imagine an alternative “cold” ecology, in which life has evolved (somehow) in the context of strongly entangled, robustly error-corrected quantum processes. It is hard to see how concepts such as “individual”, “object”, “3D spatial localization”, and even “physical reality” could be well-defined in this quantum-entangled but still-Darwinian world. These cold quantum creatures would perceive hot classical creatures like us as volcanoes of ecology-destroying raw entropy. Hey, we’re the “bad guys” in this scenario!
“Worst … plot … concept … ever”, as Comic Book Guy might say.
Three remarks:
1. It is quite interesting that the sticky points that John sees from the engineering point of view are quite similar to the places I would seek the theoretical weakness of the quantum fault tolerance hypothesis. The threshold theorem asserts that qubit coupling can be “purified” via fault tolerance. John finds it hard to implement in practice and I question whether this purification is not an artifact of too-strong mathematical assumptions concerning the noise.
Another point of similarity is the advantage of the ability to clone and use majority that John mentioned. This allows error correction and classical computation to prevail in certain cases of high-rate noise or low-rate noise with massive (“crazy”) correlations that would defeat quantum computing.
2. If we agree that the matter of noise is essentially a matter of appropriate approximations, we can ask whether in cases of dependent data we can expect an approximation up to independent (or almost independent) error. (I do not think we have in nature anything close to the massive (crazy?) forms of dependencies needed among qubits of a quantum computer, but we do have some examples we can examine.)
Take, for example, the weather. Suppose you wish to forecast the weather in 20 locations around Seattle. The weather in these places is clearly dependent. You can approximate the weather at time T by the weather at time T-t, which is better and better as t gets small. You can also monitor and use sophisticated physics and mathematics to get some better forecasts.
Do you think we can have a forecast such that the error terms will be independent among cities, or close to it?
3. So this brings me to the nice error-correction examples that Dave claims we take too much for granted. Consider the following hypothetical dialogue between Dave and me:
Dave: I can do something very nice. In the process there is an arbitrary matrix B and my method is based on my ability (using some physics you will probably not understand) to approximate it by a matrix C so that E = B-C is small.
Gil: Sounds good, cool.
Dave: And as it turns out, E would be (approximately) a rank-one matrix. (This is important and necessary.)
Gil: Sounds bad, forget it.
Dave: But why? We see such approximations all the time: in transistors and hard drives and many other natural and artificial cases that are simply shielded from your eyes.
Gil: Yes, but in ALL these cases you approximate a rank-one matrix to start with. I believe that you may be able to approximate a rank-one matrix up to a rank-one error. I do not believe that you will be able to approximate an arbitrary matrix up to a rank-one error.
I will never look at rank one matrices the same 😉
Well, stretching perhaps the blog’s hospitality, here is a (finite) sequence of additional short comments on this interesting problem:
A. Dave’s analogy between classical and quantum fault tolerance.
1. The analogy between the classical and quantum case for error correction and fault tolerance is very useful. I also share the view that the underlying mathematics is very close.
2. Of course, the threshold theorem and various later approaches to fault tolerance give strong support to the hypothesis of fault-tolerant quantum computation (at present, this hypothesis is “the only game in town”) and to the feasibility of computationally superior quantum computers.
3. A useful dividing line to draw is between deterministic information and stochastic correlated information (both classic and quantum).
4. The prominent role (sometimes hidden) of artificial and natural forms of error-correction is, of course, something to have in mind, but I do not share the view that it gives us substantial additional reasons for optimism. (It does not give reasons for pessimism either.)
5. I am not aware of (and will be happy to learn about) cases (artificial or natural) where fault tolerance and error correction are used for stochastic correlated data, nor am I aware of natural forms of error correction which use a method essentially different from “clone and use majority”.
6. I also am not aware of cases of a natural system with stochastic highly correlated elements which admits an approximation up to an “almost independent” error term. This is the kind of approximation required for fault tolerant quantum computation.
B. Correlated errors and spontaneous synchronization of errors.
7. A remark experts can skip: (Again it is better just to think about classical systems.) When you have a system with many correlated bits, an error in one bit will also affect the other bits. On an intuitive level this looks like an obstacle to error correction, but it is not. Independent errors affecting a substantial fraction of even highly correlated bits can be handled. (This is, I believe, what Dave refers to as fighting correlations with correlations.) Correlated errors which allow, with substantial probability, errors that exceed the capacity of the error-corrector are problematic. The crux of the matter (all the rank-one matrix stuff) is whether independent errors on highly correlated bits are a possible or even physically meaningful notion. (So perhaps laymen’s concerns are correct after all.)
8. The postulate of correlated errors.
Correlated errors can reflect the correlation of the actual bits/qubits which is echoed by the noise. (Another possible explanation for (*) for quantum computers is that the ability of the physical device to create strong correlations needed for the computation may already cause it to be vulnerable to correlated noise regardless of the state of the computer.)
9. (We talk about pairs of bits/qubits, but let me remark that while pairwise (almost) independence appears necessary for fault-tolerant quantum computation (and is the first thing to challenge), for the threshold theorem pairwise (almost) independence would not suffice. Similar conditions on triples/quadruples etc. are also needed.)
10. The error correlation hypothesis and T-errors.
An appealing strengthening of (*) is positive correlation, for every pair of bits, between the events that the two bits are hit. This leads precisely to what Dave calls T-errors: there is a substantial probability for a large portion of all the bits to be hit.
For example, if the probability for a bit flip is 1/1000, but for every pair the probability that they will both be flipped is > 1/3000, then there is a substantial probability that more than 25% of the bits will be hit.
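To make the arithmetic concrete, here is a small Monte Carlo sketch in Python. The “storm” mechanism below is an assumption of my own, just one simple way to realize those marginal and pairwise numbers; it is not a model anyone in this thread has proposed.

```python
import random

# Assumed mechanism: with probability 0.003 a "storm" occurs, and during a
# storm each bit is flipped independently with probability 1/3.  Then
# P(a given bit flips) = 0.003/3 = 1/1000 and, for any pair,
# P(both flip) = 0.003/9 = 1/3000, yet every storm hits roughly a third
# of all the bits at once.

def error_pattern(n, p_storm=0.003, p_hit=1/3):
    if random.random() < p_storm:               # rare synchronized event
        return [random.random() < p_hit for _ in range(n)]
    return [False] * n                          # quiet step: no errors

def demo(n=1000, trials=100_000):
    single = pair = big = 0
    for _ in range(trials):
        e = error_pattern(n)
        single += e[0]
        pair += e[0] and e[1]
        big += sum(e) > n // 4
    print("P(bit 0 hit)        ~", single / trials)  # ~ 1/1000
    print("P(bits 0 and 1 hit) ~", pair / trials)    # ~ 1/3000
    print("P(>25% of bits hit) ~", big / trials)     # ~ 0.003, far from negligible

demo()
```

Under independent 1/1000 flips the last probability would be astronomically small; under these pairwise statistics it is of the same order as the storm rate itself.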
11. T-errors as spontaneous synchronization of errors.
Low-rate T-errors amount to the errors getting synchronized. Can we have a spontaneous synchronization of errors when the noise is expressed by a “local” coupled stochastic process? There is a substantial literature (from physics, mainly) suggesting that the answer is yes.
For our purposes we want more: we want to show that for coupled processes, such as the process describing the noise of a highly correlated physical system (especially, a quantum computer), a substantial amount of synchronization of errors will always take place.
12. Can coupled noise create interesting/harmful noise patterns?
Here is a related problem: take a zebra. Can the pattern of stripes be the outcome of a locally defined stochastic process? This is a problem we used to discuss (me, Tali Tishby and others) in our student years, 30 years ago, and we have returned to related questions occasionally since then. To make the connection concrete, think about the direction from tail to head as time, and the cells in the horizontal lines as representing your qubits (white representing a defective qubit, say). Isn’t it a whole new way to think about a zebra?
13. Spontaneous synchronization and the concentration of measure phenomena.
A reason that I find T-errors/spontaneous synchronization an appealing possibility is that this is what a “random” random noise looks like. Actually, talking about a random form of noise is easier in the quantum context. If you prescribe the noise rate and consider the noise as a random (say unitary) operator (with this noise rate) you will see a perfect form of synchronization for the errors, and this property will be violated with extremely low probability (when there are many qubits). Random unitary operators with a given noise rate appear to be unapproachable by any process of a “local” nature, but their statistical properties may hold for the stochastic processes describing the noise. The fact that perfect error synchronization is the “generic” form of noise suggests that stochastic processes describing the noise will approach this “generic” behavior unless they have a good reason (such as time-independence) not to.
14. The postulate of error synchronization.
In particular, for quantum computers:
Both (*) and (**) can be tested, in principle, for quantum computers with a small number of qubits (10-20). I agree with Dave that even this is well down the road (but much before the superior complexity power of quantum computers kicks in). Maybe (*) and (**) can be tested on classical highly correlated systems (weather, the stock market).
15. The idea that for the evolution of highly correlated systems changes tend to be synchronized, and thus that we will witness rapid changes affecting a large portion of the system (between long periods of relative calm), is appealing and may be related to various other things (sharp threshold phenomena, or, closer to themes of the Pontiff: evolution, the evolution of scientific thought, etc.)
16. Cloning and taking majority:
Suppose you have T-errors where a large proportion (99%) of the bits are attacked in an unbiased way. (Namely, the probability to change a bit from 0 to 1 is the same as the probability from 1 to 0.) Noiseless bits can still be extracted. This extends to the quantum case, but only with the same conclusion: noiseless bits can still prevail. But there is no quantum error correction against such noise.
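A quick numerical sanity check of the classical half of that claim (a sketch of my own; I am reading “unbiased attack” as: an attacked clone is replaced by a fair coin flip, which is the same as flipping it with probability 1/2):

```python
import random

def attacked_copies(bit, n=1_000_000, p_attack=0.99):
    """n clones of a bit; each clone is attacked with probability p_attack,
    and an attacked clone is replaced by a uniformly random bit."""
    return [random.getrandbits(1) if random.random() < p_attack else bit
            for _ in range(n)]

def majority(bits):
    return int(sum(bits) > len(bits) / 2)

# Each clone still agrees with the true bit with probability
# 0.01 + 0.99 * 0.5 = 0.505, so the majority over enough clones is
# correct with overwhelming probability.
print(majority(attacked_copies(1)))  # 1, essentially always
```

Of course, the statement that no quantum error correction survives such noise is not something this toy check addresses.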
17. But are (*) and (**) really true??
How would I know? These conjectures look to me like they express a rather concrete (although I would like to make them even more concrete) possible reason why (superior) quantum computation may be infeasible (in principle!). The idea of (forced) spontaneous synchronization of errors definitely sounds a bit crazy. Maybe it is true, though.
C. Going below log-depth will not be easy.
18. Let me repeat the comment that even for noisy reversible computation, where the mathematical results support the physics intuition that computation is not possible, at present we cannot exclude log-depth computation (which, together with noiseless classical computation, allows factoring in polynomial time). So even assuming the pessimistic notions of noise, proving a reduction to classical computation will not be an easy matter.
D. Optimism, skepticism
And there were the matters of skepticism and optimism and even the virtues of the universe; well, all these are extremely interesting issues but their direct relevance to the factual questions at hand is not that strong. One comment is that we do not seem to have good ways to base interesting scientific research on skeptical approaches. (Maybe social scientists are somewhat better at that.)