(Update: Sean Barrett points out that the comment he left when I first posted about this paper made exactly the point I talk about in this post. Somehow my brain didn't register this when I read his comment. Doh.)
Back from a workshop in Arlington, VA.
One of the most interesting events at this workshop was that Daniel Lidar talked (all too briefly) about his (along with Alicki and Zanardi's) objections to the theory of fault-tolerant quantum computation. I've talked about this before here, where the resulting discussion in the comments was very interesting. At the workshop, Hideo Mabuchi brought up something about the paper which I had totally missed. In particular, the paper says that (almost all) constructions of fault-tolerant quantum computation are based upon three assumptions. The first of these is that the time to execute a gate, times the Bohr frequency of the system, should be of order unity. The second assumption is a constant supply of fresh ancillas. The third is that the error correlations decay exponentially in time and in space.
What Hideo pointed out was that this first assumption is actually too strong and is not what is assumed in the demonstrations/proofs of the theory of fault-tolerant quantum computation. The Bohr frequency of a system is the frequency which comes from the energy spacings of the system doing the quantum computing. The Bohr frequency is (usually) related to the upper limit on the speed of computation (see my post here), but it is not the speed which is relevant for the theory of fault-tolerance. In fault-tolerance, one needs gates which are fast in comparison to the decoherence/error rate of your quantum system. Typically one works with gate speeds in implementations of quantum computers which are much slower than the Bohr frequency. For example, in this implementation of a controlled-NOT gate in ion traps at NIST, the relevant Bohr frequency is in the gigahertz range, while the gate speeds are in the hundreds of kilohertz range. What is important for fault-tolerance is not that the first number, the Bohr frequency, is faster than your decoherence rates/error rates (which it is), but instead that the gate speed (roughly the Rabi frequency) is faster than your decoherence rates/error rates (which is also true). In short, the first assumption used to question the theory of fault-tolerance doesn't appear to me to be the right assumption.
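To put some rough numbers on this (the gigahertz and hundreds-of-kilohertz figures are the ones quoted above; the decoherence rate is a number I am just assuming for the sake of illustration), here is a quick back-of-the-envelope check:

    # Back-of-the-envelope timescale check; the decoherence rate is an assumed,
    # purely illustrative number, not one taken from the NIST experiment.
    bohr_frequency   = 1e9   # Hz: qubit energy splitting, gigahertz range
    gate_speed       = 3e5   # Hz: roughly the Rabi frequency, hundreds of kilohertz
    decoherence_rate = 1e2   # Hz: assumed error/decoherence rate

    gate_time = 1.0 / gate_speed  # seconds per gate, ~3 microseconds

    # The paper's Assumption 1 would want this product to be of order unity; it is ~3000.
    print("gate time x Bohr frequency:", gate_time * bohr_frequency)

    # What fault-tolerance actually needs is few errors per gate; this is ~3e-4.
    print("errors per gate           :", gate_time * decoherence_rate)

The point is simply that the first product is nowhere near unity, while the second product, the one the threshold theorems actually care about, is comfortably small.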
So does this mean that we don't need to worry about non-Markovian noise? I don't think so. I think in many solid state implementations of quantum computers there is a strong possibility of non-Markovian noise. But I don't now see how the objection raised by Alicki, Lidar, and Zanardi applies to many of the proposed quantum computing systems. Quantifying the non-Markovian noise, if it exists, in different physical implementations is certainly an interesting problem, and an important task for the experimentalists (and something they are making great progress on, I might add). Along these lines it is also important to note that there are now fault-tolerant constructions for non-Markovian noise models (Terhal and Burkard, quant-ph/0402104, and Aliferis, Gottesman, and Preskill, quant-ph/0504218). Interestingly, these constructions postulate noise models which are extremely strong in the sense that the memory correlations can be infinitely long. However, it is likely that any non-Markovian noise in solid state systems isn't of this severely adversarial form. So understanding how the "amount" of non-Markovian dynamics affects the threshold for fault-tolerance is an interesting question.
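Schematically (this is my shorthand for how those constructions phrase things, so the notation is mine and the inequalities only indicate scaling), the small parameter changes when you drop the Markovian assumption:

$\gamma\, t_g \lesssim \epsilon_{\rm th}$ (Markovian: $\gamma$ an error rate, $t_g$ the gate time),

$\lambda\, t_g \lesssim \epsilon_{\rm th}$ (non-Markovian, Terhal-Burkard / Aliferis-Gottesman-Preskill: $\lambda$ roughly the strength of the system-bath coupling).

The adversarial flavor comes from the fact that nothing in the non-Markovian constructions bounds how long the bath remembers, only how strongly it couples to the system.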
I just wanted to point out that I think this is basically the same point I made when this paper first came up for discussion:
http://dabacon.org/pontiff/?p=959#comment-20585
Essentially the gate time can be the inverse of a Rabi frequency, which can be much longer than the bath correlation time, but still short compared to the timescale set by the error rates. Hence we still have a Markovian description for the errors, but can also have low error rates. If we turn off the driving field, the same Markovian error model still holds, and we can now wait for the qubit to cool down, thus allowing ancilla preparation.
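In symbols, the hierarchy of timescales being invoked is roughly

$\tau_{\rm bath} \ll t_{\rm gate} \sim 1/\Omega_{\rm Rabi} \ll 1/\gamma_{\rm error},$

with the bath correlation time $\tau_{\rm bath}$ much shorter than the gate time, and the gate time in turn much shorter than the timescale on which errors accumulate.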
I just noticed this new comment on our paper (unfortunately, almost a month after it was posted). In fact, in Section V.B (p. 5) of the paper we discuss the case of quasi-degenerate qubits and replace the Bohr frequency by the Rabi frequency. The same WCL-type objection applies, just with a different energy scale (the Rabi frequency squared divided by the detuning). Our "Assumption 1", which emphasizes the Bohr frequency, was stated in this (somewhat misleading) form for simplicity. In the next version (stay tuned!) we will discuss this issue in much more detail and fix the confusing statement of Assumption 1.
In short, I fully agree with Dave that "typically one works with gate speeds in implementations of quantum computers which are much slower than the Bohr frequency," but this does not help in overcoming our objection, since the Rabi frequency sets a new energy scale for the WCL-type inconsistency.
I do take issue with Dave’s statement “In fault-tolerance, one needs gates which are fast in comparison to the decoherence/error rate of your quantum system”, in the following sense: a decoherence rate, in the sense of an exponential decay rate, is a *derived* quantity, that comes out of a *Markovian* treatment (e.g., decoherence rate is 1/T_2). Therefore one cannot a priori compare gate speed (and let that be the Rabi frequency if we must) to exponential decoherence rate when one tries to derive consistency conditions. Why? Because gate speed is an input to the theory while decoherence rate is an output (so it’s an apples to oranges comparison). In other words, one must first prove that a Markovian model is applicable before one can start talking about exponential decay rates. (On the other hand, if one has a good way to define a non-Markovian decoherence rate — it will be non-exponential — then there is no problem). I want to stress this point because one so often hears people comparing gate speed to (implicitly Markovian) decoherence rates, yet we showed in our paper that this is not always a properly defined notion. The Markovian limit is compatible only with adiabatic gates, with a timescale set by a relevant system energy scale (Bohr or Rabi). If you introduce fast gates you can’t talk about Markovian decoherence rates.
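To make this concrete with a toy example (the notation here is only illustrative): a Markovian coherence decay

$C(t) = e^{-t/T_2}$, with constant rate $-\frac{d}{dt}\ln C(t) = 1/T_2$,

behaves very differently from a non-exponential decay such as

$C(t) = e^{-(t/\tau)^2}$, whose instantaneous "rate" $-\frac{d}{dt}\ln C(t) = 2t/\tau^2$

depends on when you look, so there is no single number to compare a gate speed against.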
As for Sean Barrett’s follow-up comment: please see Robert Alicki’s reply to his comment in the original discussion.
I agree that one should not completely separate gate speed from decoherence: clearly, if I try to run my computer too fast, all I will do is blow the system up. However, over wide ranges of parameters, I claim that comparing the gate speed to a decoherence rate is a valid thing to do.
I guess I have to wait to see your new draft, but I don’t see how your discussion of the degenerate case applies to the nondegenerate case.
If you want a preview of some of the details that will appear in our new version of the paper take a look at the IBM Fault Tolerance Workshop talks. Slides 15+16 of my talk explain in some technical detail the issue with the WCL.