Thermodynamics is one of the most important tools we have for understanding the behavior of large physical systems. However, it is very important to realize when thermodynamics is applicable and when it is not. For example, try to apply thermodynamics to the Intel processor inside the laptop I am writing this entry on. Certainly the silicon crystal is in thermal equilibrium, but then how am I able to make this system compute? If states are occupied with probabilities proportional to a Boltzmann factor, then how can my computer operate with all sorts of internal states corresponding to, say, its memory? Let's say that all of these internal states of my computing machine are at about the same energy (which is, to a decent approximation, true). Then, according to thermodynamics, each of these states should be occupied with the same probability. But the last time I checked, the sentence I am typing is not white noise (some of you may object, 😉 )
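To put a rough number on this, here is a little Python sketch (the micro-eV splittings are invented for illustration) showing how flat the Boltzmann distribution is over nearly degenerate states at room temperature:

```python
import math

def boltzmann_weights(energies_ev, temp_k):
    """Occupation probabilities p_i proportional to exp(-E_i / kT)."""
    k_b = 8.617e-5  # Boltzmann constant in eV/K
    weights = [math.exp(-e / (k_b * temp_k)) for e in energies_ev]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

# Three hypothetical "memory states" split by a few micro-eV:
probs = boltzmann_weights([0.0, 1e-6, 2e-6], temp_k=300.0)
print(probs)  # each is ~1/3: thermodynamics alone can't single out a memory state
```

At 300 K, kT is about 0.026 eV, so micro-eV splittings shift the weights only far past the fourth decimal place: equilibrium thermodynamics would leave every memory state essentially equally likely.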

Today, Robert Alicki, Daniel Lidar, and Paolo Zanardi have posted a paper in which they question the threshold theorem for fault-tolerant quantum computation and claim that the normal assumptions for this theorem cannot be fulfilled by physical systems. I have a lot of objections to the objections in this paper, but let me focus on one line of dispute.

The main argument put forth in this paper is that if we want to have a quantum computer with Markovian noise as is assumed in many (but not all) of the threshold calculations, then this leads to the following contradictions. If the system has fast gates, as required by the theorem, then the bath must be hot and this contradicts the condition that we need cold ancillas. If, on the other hand, the bath is cold, then the gates can be shown to be necessarily slow in order to get a valid Markovian approximation. Both of these conditions come from standard derivations of Markovian dynamics. The authors make the bold claim:

These conclusions are unavoidable if one accepts thermodynamics…We take here the reasonable position that fault tolerant [quantum computing] cannot be in violation of thermodynamics.

Pretty strong words, no?

Well, reading the first paragraph of this post, you must surely know what my objection to this argument is going to be. Thermodynamics is a very touchy subject and cannot and should not be applied ad hoc to physical systems.

So let's imagine running the above argument through a quantum computer operating fault-tolerantly. Let's say we do have a hot environment. We also have our quantum system, which we want to make behave like a quantum computer. Also we have cold ancilla qubits. Now what do we do when we are performing quantum error correction? We bring the cold ancillas into contact with the quantum computer, interact the two, and throw away the ancillas. Now we can ask the question: is the combined state of the cold ancillas and the hot environment in thermal equilibrium? Well, yes, both are in thermal equilibrium before we start this process, but they are in thermal equilibrium at two different temperatures. OK, so now we have an interaction between the system and the cold ancillas. So let's do this. Now these two systems, the quantum computer and the ancillas, clearly couple to the hot bath. Therefore we can assume that the Markovian assumption holds and, further, that the gate speed for the combined system-ancilla system is fast. No problem there, right? OK, now we throw away the cold ancillas. So we've done a cycle of quantum error correction without violating the conditions set forth by the authors. How did we do this?

We did this by being careful about what we called the "system." (Or, more directly, we have to be careful about what we call the "bath." But really these are symmetric, no?) We started out the cycle with the system being the quantum computer. Then we brought in the cold ancillas. Our system now includes both the quantum computer and the ancillas. Since we are now enacting operations on this combined system, our environment is the original bath, which is hot (and which may now couple to the ancillas). We can perform fast gates on this combined system, and then we may discard the ancillas.

In order for the authors' argument to work, they have to assume that the "system" is always just the quantum computer. But then clearly the assumption of the environment being in thermal equilibrium is violated at the beginning of the error correcting cycle: the ancillas are cold but the bath is hot. Both are independently in thermal equilibrium, but the combined system is not in thermal equilibrium at a single temperature. The interactions with the hot bath do imply that we can perform fast gates. The interactions with the cold ancillas alone would imply slow gates. But when we bring the cold ancillas and the quantum computer together, we can still have fast gates, because our system now consists of the computer plus the ancillas, and the remaining environment is hot. The ancillas are not part of the thermal bath which causes problems for our quantum computer. Certainly the authors are not objecting to the fact that we can prepare cold ancilla states? So I see no contradiction in this paper with the threshold theorem. (A further note: there is also a threshold for fault tolerance when the noise is non-Markovian. I'm still trying to parse what the authors have to say about those theorems. I'm not sure I understand their arguments yet.)
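A toy calculation makes the entropy bookkeeping of one such cycle explicit (this is a deliberately minimal sketch of the ancilla-refresh idea, not the actual fault-tolerant construction): swap a fresh, pure ancilla with a maximally mixed "computer" qubit and then discard the ancilla.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# A "hot" computational qubit: maximally mixed, carrying one bit of entropy.
system = np.eye(2) / 2
# A cold (pure) ancilla, freshly prepared by the external refrigerator.
ancilla = np.array([[1.0, 0.0], [0.0, 0.0]])

# Joint state of system (x) ancilla, then SWAP them and discard the ancilla.
joint = np.kron(system, ancilla)
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
joint = swap @ joint @ swap.T

# Partial trace over the ancilla (the second tensor factor).
new_system = joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(von_neumann_entropy(system))      # 1.0 bit before the refresh
print(von_neumann_entropy(new_system))  # 0.0 bits after: the entropy left with the ancilla
```

The system's entropy drops from one bit to zero; that bit leaves with the discarded ancilla, which is exactly the entropy dump the cold ancillas provide.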

Thermodynamics is, basically, a method for reasoning about large collections of physical systems when certain assumptions are made about the system. Often we cannot make these assumptions. (A classic case of this, which is not relevant to our discussion but which is interesting, is the thermodynamics of a system of many point particles interacting via gravity: here thermodynamics can fail spectacularly, and indeed things like the internal energy of the system are no longer extensive quantities!) In the above argument, we cannot talk about two systems being at the same temperature: we have two separate systems with different temperatures. Certainly if we bring them together and they interact, then under certain conditions the two will equilibrate. But this is explicitly what doesn't happen in the fault-tolerant constructions. This is, indeed, exactly what we mean by cold ancillas!

Understanding the limits of the threshold for fault-tolerant quantum computation is one of the most interesting areas of quantum information science. I’ve bashed my head up against the theorem many times trying to find a hole in it. I think that this process, of attempting to poke holes in the theorem, is extremely valuable. Because even if the theorem still holds, what we learn by bashing our heads against it is well worth the effort.

Updated Update: Daniel Lidar, Robert Alicki, and others have posted responses and comments below. I highly recommend that you read them if you found this entry interesting!

Thanks for this Dave. If you’ve got time, I’d be interested to read your comments on the recent and somewhat related Phys. Rev. Lett. 94, 230401 (2005): “An Intrinsic Limit to Quantum Coherence due to Spontaneous Symmetry Breaking”.

Prof. Lidar, I am confused by the logic of your paper. It seems that you present two master equations, M1 and M2, and a set AAA of three minimal assumptions for error correction. You conclude that AAA and M1 contradict each other, and so do AAA and M2. Then you say that this means that AAA by itself is inconsistent. How does this follow? All I see is that neither of your two models applies in a regime where AAA is possible. AAA might still be consistent if we operate the QC outside your two regimes, i.e., with fast gates and low temperature.

The problem can't be made to go away just by redefining what counts as the system. The argument is this: there is *one* Hamiltonian for the quantum computer (QC), ancillas (A), and bath (B). This Hamiltonian sets an energy scale and hence a temperature (T) scale. When QC, A, and B are all coupled, and we assume the Markovian singular coupling limit (SCL), this entire QC+A+B system must satisfy one thermal equilibrium (KMS) condition at high T. Hence the ancillas must be hot.

Now Dave will probably say: there is not one Hamiltonian initially, or rather not one bath initially. The A’s are coupled to a second B initially, at low T. Call that second bath B2. When the A’s are introduced and couple to the QC, they are cold, because they retain the memory of their interaction with B2. But the QC is operating in the Markovian (singular coupling) limit, hence is coupled to a hot bath, which we’ll call B1. Thus the A’s now become coupled to B1. What happens next is that there is a temperature gradient and the A’s start to heat up, i.e., equilibrate with B1 (just as the QC has already equilibrated with B1).

At this point you could make the following objection. This whole argument ignores the relevant timescales. Specifically, the A’s are used very rapidly for syndrome extraction and conditional operations (and whatever else fault tolerance might require), and then very quickly discarded. Thus they do not have time to equilibrate with B1 (which would happen over a “T_1” timescale, which we tend to think of as long (compared to T_2)). I.e., they remain cold during the time interval over which they are needed, and hence there is no problem.

I can accept this time-scales objection in principle. But the problem is that it is clear that *some* ancilla heating *will* occur. This is treated in our section on Impure Ancillas. Fault tolerance only tolerates a tiny amount of ancilla impurity (or equivalently, heating). Thus the message is that one must be much more careful about the physics of ancilla heating in the SCL, and the required rates of cooling. Can the relevant timescales be satisfied? Not clear from standard fault-tolerance arguments. E.g., the thermodynamic analysis of Nielsen & Chuang, pp. 569-571, tells you that the entropy decrease rate in a quantum computer must be at least balanced by the entropy increase rate in the ancilla. This is fine, but it doesn't tell you whether the resulting timescales are compatible with keeping the whole system Markovian in the SCL.

Would you really want to transport cold ancillas from outside to the inside? Probably not: transport is a non-equilibrium process that is likely to cause a lot of extra decoherence. Note in this context the discussion, in our Measurements section, of the paper by Shnirman and Schön, who did a careful study of what happens when you have a measurement apparatus constantly coupled to the QC and perturb it from equilibrium when you're ready to measure. That's one of the few references I know of where the time-scale issue was carefully analyzed.

A couple more general comments:

1. The comparison to classical computers doesn't help us much, in my opinion. They are very different beasts, in particular in the manner in which computational degrees of freedom are represented (which has implications for the thermodynamic discussion). Undoubtedly one can draw inspiration from the classical case, such as in Dave's beautiful new work on self-correcting memories, but beyond inspiration I would strongly hesitate to draw comparisons.

2. We don’t have a “no-go” theorem in our paper; we merely point out that it seems that the *present* theory of quantum fault tolerance is not free of internal inconsistencies.

3. Important: Dave’s objection to our objections focuses on the Singular Coupling Limit (SCL) discussion, which is actually the less physical of the two limits we discussed. The more interesting one is the Weak Coupling Limit (WCL), where there is no problem with ancillas, but it turns out that only slow (adiabatic) gates are allowed.

Thanks for the response, Daniel.

I believe you see my main point, and the gist of your argument about a second bath is getting at this. The main point is that the ancillas, or the ancillas' bath, need to be constantly cooled. I see no problem with this. Certainly one must be conscious of this when designing a quantum computer: one really does need a good place to dump entropy, and one needs to be able to do this on a fast timescale. Certainly this can be done in many of the proposals for quantum computers: for example, cooling of ions is achieved by dumping the entropy into the electromagnetic field.

I think perhaps the debate here centers on what is meant by having cold ancillas. To me this does not mean I have some finite number of ancillas which I cool at the beginning of the computation and then use for quantum error correction. No, the condition of having cold ancillas is that you have ancillas which you can constantly cool. I point to experiment when I say that I do not believe there are fundamental difficulties in this process, but you are right that the rate of cooling needs to be high.

I also disagree that the classical case has nothing to do with the quantum case. I think this is a myth perpetuated by those who wish for decoherence to be the savior of the philosophical problems of quantum theory. Phase decoherence is different and important simply because, in the rotating-wave approximation, it is the process through which the energetics of the unperturbed system can be conserved. If you look at, for example, my work with Ken Brown and Birgitta Whaley on the ideas in supercoherent qubits, you see that the role of quantum error correction (or error detection in this case) is specifically designed to eliminate this distinction. I believe that quantum error correction effectively does the same thing: it produces an effective Hamiltonian under which local independent couplings look like heating of the system, i.e., exactly like bit-flip errors. So I think the whole point of quantum error correction is that it turns phase decoherence into exactly the type of noise a classical system must deal with.
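One way to see the phase-flip-to-bit-flip conversion concretely (the standard single-qubit identity behind the three-qubit phase-flip code, offered here as a minimal sketch rather than anything from the supercoherent-qubit papers): conjugating a phase flip by Hadamards turns it into a bit flip, which is exactly the classical error type.

```python
import numpy as np

# Pauli operators and the Hadamard gate.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# In the Hadamard-rotated frame used by the phase-flip code, a phase error Z
# acts exactly like a classical bit-flip error X (and vice versa).
print(np.allclose(H @ Z @ H, X))  # True
print(np.allclose(H @ X @ H, Z))  # True
```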

As to the WCL model, I have no idea why I'd want to be in that limit (slow gates). I'm happy working in the SCL. But again, in the weak coupling limit the assumption is made that there is a bath and that it is in equilibrium. I do not believe that this will be the case for quantum error correction, as argued in the SCL case.

What would be interesting is to repeat your analysis, but this time for a different bath which is constantly being cooled and therefore provides an entropy dump. It could be that there is still a problem (and certainly for certain bad physical parameters you could get in trouble), but I don't see these problems right now.

Reply to R. Tucci:

Actually there is only one master equation (the usual Davies-Lindblad one). There are two ways to derive it which are mathematically consistent: the SCL and the WCL (singular and weak coupling limits). There are three assumptions, all of which are made in standard quantum fault tolerance theory: A1 = fast gates, A2 = cold (pure) ancillas, A3 = Markovian approximation. We show:

A1+A3 incompatible with WCL, requires SCL, ==> high T ==> contradiction with A2.

A2+A3 incompatible with SCL, requires WCL, ==> slow gates ==> contradiction with A1.

Does that help?

Of course it is possible to consider dropping A3, which is indeed what was done in the recent work by Terhal & Burkard, and also (later in their paper) by Aliferis, Gottesman, and Preskill. We offer some comments on possible inconsistencies there in Section VI.D of our paper. (Basically there appears to still be a Markovian assumption made, and hence the previous criticism may apply.)

Reply to Dave’s second posting:

Dave writes: "I think perhaps the debate here centers on what is meant by having cold ancillas. To me this does not mean I have some finite number of ancillas which I cool at the beginning of the computation and then use for quantum error correction. No, the condition of having cold ancillas is that you have ancillas which you can constantly cool."

I fully agree; in fact there is a paper by Aharonov et al., "Limitations of Noisy Reversible Computation" (quant-ph/9611028), which, as far as I recall, proves that a constant supply of *fresh + (nearly) pure* ancillas is *necessary* to avoid efficient classical simulation.

The problem is how you are going to get this supply of fresh+pure ancillas to couple to your quantum computer. There are (at least) two options: 1) you cool the ancillas and reuse them, 2) you discard the used ancillas and bring in new ones. In both cases you must introduce a second (cold) bath, therefore set up a temperature gradient, and thus dramatically increase all entropy production rates. Option 2) is particularly bad since (as I mentioned in the previous reply to Dave) physical transport is a sure way to increase decoherence rates. Anyway, we’re back to rates again, and it seems that we at least agree that one must be careful about studying the feasibility of fast cooling. (Again, we don’t have a no-go theorem in the paper.)

Dave further writes: “So I think the whole point of quantum error correction is that it turns the phase decoherence into exactly the type of noise a classical system must deal with.” Again I agree, but this presumes that QEC works!

Then Dave writes: "As to the WCL model, I have no idea why I'd want to be in that limit (slow gates.) I'm happy working the SCL." I'm surprised you'd so easily dismiss the WCL 🙂 — that is the physical limit, of weak system-bath coupling. It's exactly the regime where it makes sense to build a quantum computer. The WCL is furthermore consistent with thermodynamics at all T. The SCL is actually a mathematical formality: it *requires* taking the high-T limit, since only then is it consistent with thermodynamics. (Are we in agreement that consistency with thermodynamics is a necessary condition for a physically valid master equation? I have a feeling that maybe not, since you write: "But again in the weak coupling limit, the assumption is made that there is a bath and that it is in equilibrium.")

Finally, Dave writes: “What would be interesting is to repeat your analysis, but this time for a different bath which is constantly being cooled and therefore provides an entropy dump”. I think this would indeed be interesting: you need two baths: a hot one for the QC (I assume you want to be in the Markovian SCL limit), and a cold one for the ancillas. Then the cold ancillas get coupled to the hot bath, and the rates show up…

I think we aren't in agreement here. Thermodynamics is applicable only when a system is in thermodynamic equilibrium. There are all sorts of cases where this is not true. In regards to your paper, I would say that the KMS condition does not cover cases where you have baths at different temperatures, or more specifically where one of the baths is being used to cool the ancillas. Sure, if you bring multiple baths together they (might, under the correct circumstances) equilibrate, but certainly this is a question of timescales.

Some points worth adding:

– Heating up of ancillas once they are introduced into the system is already explicitly included in threshold calculations. Ancillas, just like any other qubits, are subject to errors — this is the mechanism for thermalization. The assumption we need is that their error rate is low before you perform any gates on them (but it need not be lower than the error rate per time step, certainly, and it could actually be a few times that without a serious effect on the overall threshold).

– Cooling ancillas in a separate reservoir might not be nearly as difficult as Daniel is claiming. For instance, people are seriously talking about (and even doing experiments) moving ions around for ion trap QC. There will be some decoherence associated with that, certainly, but it need not be a deal-breaker. The distance you have to move them is a serious issue, of course. Also, it is sometimes possible to couple to separate environments without physically moving at all. Again in the ion trap: cooling works by dumping entropy into the vacuum surrounding the trap, whereas errors I think mostly come from interactions with the trap itself.

– The whole discussion would benefit from a reality check. We can’t do experiments yet to see whether fault-tolerance works or not, but it is possible to do experiments to study error models and the possibility of cooling. Among the systems which people consider as candidates for QC, are there any in which the errors are Markovian (to a reasonable approximation) but in which cooling is possible? My impression is that there are (say in the AMO arena), but I am not expert enough in any implementations to give a definitive answer.

Consistency with thermodynamics is required in the following sense: when a system is coupled to a heat bath at temperature T, but is otherwise left alone, it will eventually equilibrate with the heat bath at T, after a long time (many T_1's). Markovian master equations too should have this long-time property, in order to ensure their physical consistency. In this sense I can certainly not agree that "Thermodynamics is applicable only when a system is in thermodynamic equilibrium".

[Note that decoherence-free subspaces and subsystems could be thought of as an exception: they are coupled to a bath but will never equilibrate. But then they’re not really coupled to a bath…]

1) General comments on “thermodynamics for computers”

I do not believe that thermodynamics is inapplicable to classical and quantum computers.

A classical computer is never in a thermal equilibrium state; its internal states are metastable macro-states with enormous lifetimes (say, billions of years or more). Any classical bit is a complicated system with many internal degrees of freedom. The logical value of a bit is determined by the state of a certain macroscopic collective variable, and not by the state of these internal degrees of freedom. The process of computation is fully described by the first and second laws of thermodynamics.

Acting with external (electromagnetic) fields, we perform work which is transformed into heat which must be removed. But this heat is produced in the internal degrees of freedom of any single bit, because their relaxation processes are fast. The relaxation of the collective degrees of freedom which actually carry all the information is extremely slow, and hence the corresponding entropy production is for all practical purposes equal to zero.

In my opinion, information is destroyed by entropy-production processes (entropy production is always positive). These processes make the distance between two initial states ever smaller, and therefore any two states become less distinguishable with time (though they can have lower entropies!). If this distance decreases exponentially, then the corresponding decay time gives the limit on the reliable computation time. In macroscopic classical computers this decay time is determined by the extremely long relaxation time mentioned above: practically infinite.

A quantum computer cannot work on macrostates, because we do not have quantum superpositions of them. Therefore relaxation times for quantum bits are short. The idea of FT-QEC is to suppress these relaxation (entropy production) processes by some (for me) miraculous tricks. I cannot imagine any physical mechanism which, for a generic system, could drastically slow down ENTROPY PRODUCTION. Entropy itself can be reduced, but this is irrelevant for the information contained in a state.

However, we do not at the moment have a "no-go theorem" which could be generally accepted, and therefore we point out some physical inconsistencies in the standard approach to FT.

Indeed, the most tricky point in FT is the application of "cold ancillas". I do not see how "cooling" could increase information. It can decrease the entropy, but "cooling" is an irreversible process with positive entropy production after all.

2) Many baths, KMS and WCL.

Of course, one can apply the WCL to a model with many baths, each of them in a KMS (i.e., equilibrium) state at a different temperature (there are many papers on this topic). The result is a master equation governed by a sum of generators, each corresponding to a given bath. The stationary state exists and is of course not a thermal equilibrium state, but describes some stationary currents flowing through the system.

3) Do we need two baths with different temperatures for computation?

No! A computer is not a heat engine. You need only one bath to remove heat generated by the gates which perform work on the system. On the contrary, to avoid entropy production we should not allow temperature gradients, and one bath is enough. "Cold ancillas" is a misleading name. One should speak about (almost) pure ancillas. Temperature has meaning only if the Hamiltonian is defined. Any pure state can be arbitrarily close to a thermal state at arbitrary temperature!

Well, this is sort of off topic, but I've always found this backwards. No offense meant to those who have done the hard work of showing such conditions, but thermodynamics is NOT universally applicable. Certainly, any device with even a minimal amount of complexity is not well described as being in thermal (Boltzmann) equilibrium. Evidence for this is overwhelming, and I would say most compelling in the absolute ubiquity of power laws for various distributions across all sorts of different complex processes. If, for example, we assume that the cosmic ray background is in thermal equilibrium, we get results which are off by tens to hundreds of orders of magnitude.

Ack, I’m losing track of all the comments! Well let me comment briefly on Robert Alicki’s comment!

I see nothing different here between classical and quantum error correction. The degrees of freedom in quantum error correction are better described as "non-local" than "collective", but the effect of quantum error correction is the same as for classical systems: to make the relaxation of these non-local degrees of freedom slow (if they take billions of years to decohere I would be happy, but I'd be happy with a hundred years ;)). And I certainly disagree that the entropy production in my classical computer is low: one of the biggest obstacles in building today's computers is exactly the problem of keeping them cool (just ask IBM and Apple!)

Reply to Dave's comment no. 12

1) Imagine a classical bit as a small magnet which contains, say, 10^16 individual spins. Its logical value, say 0, is represented by an enormous number of microstates which may differ from the state "all spins down" by reversing a certain number of spins (on average, 10^8 spins are "up"). The interaction with the environment produces a very fast random walk on the space of microstates but does not change the macrostate, and hence the logical value. The larger the classical bit, the more stable it is with respect to noise.

For a quantum bit the situation is the opposite: the larger the system, the faster the decoherence. Random reversal of a single spin in a magnet destroys a quantum superposition of the states "all spins up" + "all spins down".

2) I did not say that the entropy production in my notebook is low. I said that entropy is almost entirely produced in the irrelevant degrees of freedom. Simply, those small spins are heated, but the net magnetization does not change its sign.

3) Thermodynamics is not about equilibrium only. There exists non-equilibrium thermodynamics, which describes stars and even the whole Universe.

Robert: I like your analogy with classical magnetic systems a lot. It's one I use all the time! But I think that quantum error correction is no different: the difference is that the order parameter in the classical system is the total magnetization, while in the quantum system the relevant order parameter is different and is not so simple. I think you might enjoy the discussion of this in my recent paper on "self-correction": quant-ph/0506023.

Dave writes: "thermodynamics is NOT universally applicable. Certainly, any device with even a minimal amount of complexity is not well described as being in thermal (Boltzmann) equilibrium. Evidence for this is overwhelming, and I would say most compelling in the absolute ubiquity of power laws for various distributions across all sorts of different complex processes."

To quote Robert (above): “Thermodynamics is not about equilibrium only. There exists non-equilibrium thermodynamics”. Moreover, the discussion in our paper is *not* about thermal equilibrium. Of course, a computer in thermal equilibrium cannot compute. Indeed, a functioning computer is an example of a system exhibiting complexity (to use Dave’s term). But if that computer was left alone, coupled only to a heat bath, it would eventually equilibrate, wouldn’t it Dave?

Once one accepts the last premise, the consistency of Markovian dynamics with thermodynamics must follow, in the sense that it is used in our paper.

Well, I think we are getting a bit off topic (see Daniel Gottesman’s comment especially! Mostly because he sounds like the ghost of Feynman: nature as the arbitrator.) but still this is fun.

As for the role of thermodynamic equilibrium in your paper, I was under the impression that the reason you justify the two derivations is that they both use the KMS assumption. And the KMS assumption leads directly to the assumption of thermal equilibrium for the bath, doesn't it? From your paper: "the fundamental importance of the KMS condition is captured by the fact that it is necessary in order for thermodynamics to hold."

Maybe. But I don't think carrying out fault-tolerant quantum error correction exactly qualifies as leaving the computer alone! I say maybe precisely because I think non-equilibrium thermodynamics has a hell of a lot to say about physics, and I do not believe it is possible to demonstrate the validity of Boltzmann thermodynamics for all physical systems. I would not be surprised if computers (both classical and quantum) turn out to be among those systems which are not described by Boltzmann thermodynamics.

Way off topic: Similar questions arise when you talk about whether life can be perpetuated indefinitely in a cooling universe. Does this mean that we must all die in a cold bath way down the line? I'm not sure anyone knows the answer to this (and last I heard, people like Dyson kept flip-flopping on what they thought was the correct answer to this conundrum.)

Of course these are my own prejudices. And remember, I just finished a postdoc at the Santa Fe Institute, whose mantra is "Boltzmann isn't always correct!"

Apologies if this is even further off-topic. I have a comment on the paper, and this seems like an appropriate place to post it since there is already some discussion here.

In the weak coupling limit, the argument seems to be that one can only derive a Markovian master equation in the case where the Hamiltonian is constant over a timescale significantly longer than tau_R (which is, if I have understood correctly, the correlation time for the function F(t), Eq. (3)). Since tau_R ~ 1/kT, for a low temperature environment we have omega_kl >> kT, i.e. omega_kl >> 1/tau_R. So the argument goes: in the WCL at low temperatures, we cannot implement fast gates by rapidly switching the Hamiltonian on and then off for a period pi/omega_kl, whilst still modelling the evolution with a MME.

Conversely, in the high temperature, "singular coupling" limit, the Lindblad terms in the MME are independent of the form of the system part of the Hamiltonian, and so one can model rapidly switched gates, but not ancilla preparation.

However, this argument would seem to exclude the commonly occurring case where there are two widely differing frequency scales in the system Hamiltonian. This occurs, for example, in the case of a two-level atom driven by a laser. The energy gap between the ground and excited states might correspond to an optical frequency, omega_eg ~ 10^15 /s. However, the Rabi frequency for driven oscillations may be much smaller than this, say omega_R ~ 10^8 /s. If the energy gap is sufficiently large, omega_eg >> 1/tau_R, then one can make WCL-type approximations by neglecting rapidly oscillating terms like exp(i omega_eg t). If the Rabi frequency is also sufficiently small, such that omega_R << 1/tau_R, then I think that it's also possible to make SCL-type approximations: in particular, when deriving the MME we can make approximations like G(omega_eg) = G(omega_eg + omega_R) = G(omega_eg - omega_R). (The reason this is valid is the same as Alicki et al give in their paper: G varies significantly only on a frequency scale 1/tau_R.)

I think that making these approximations leads to a MME with Lindblad terms that are independent of the value of omega_R, and which is also valid for a non-adiabatically varying omega_R. One frequently sees this in quantum optics textbooks: adding weak driving to a two-level atom only modifies the Hamiltonian term of the MME and not the Lindblad term. In the absence of driving, the Lindblad terms drive the system to a thermal equilibrium state which can be very close to the ground state. I haven't seen this derived to the same level of mathematical rigour as is found in the Davies paper (Ref. 38 of Alicki et al), but I expect that such a derivation is possible. I think a version of this argument is given e.g. in the textbook "Statistical Methods in Quantum Optics 1: Master Equations and Fokker-Planck Equations" by Carmichael, and a somewhat brief version is also given in a recent paper I wrote (with Tom Stace) at cond-mat/0412270.

The upshot is that one can model reasonably fast gates (on a timescale 1/omega_R >> tau_R but still shorter than any decoherence times) and also good ancilla preparation (by turning the laser off and waiting for relaxation) within the same Markovian master equation.

I think a similar argument holds for the case of laser cooling in an ion trap that Daniel Gottesman mentions above; probably it is possible to derive a Markovian description which allows both fastish logic gates and switchable laser coupling to the high-lying states used for cooling.
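Sean's textbook picture can be sketched numerically (this is my own toy model in the rotating frame with made-up parameters, not anything from the paper): the Rabi drive omega_R enters only the Hamiltonian part of the master equation, while the spontaneous-emission Lindblad term stays fixed; switching the drive off then relaxes the atom to (near) the ground state under the very same MME.

```python
import numpy as np

# Toy rotating-frame MME for a driven two-level atom (basis order |g>, |e>):
#   drho/dt = -i[H(t), rho] + gamma * (sm rho sp - {sp sm, rho}/2)
# The drive enters only H; the Lindblad (decay) term never changes.
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_minus = |g><e|
sp = sm.conj().T                                # sigma_plus
gamma = 1.0                                     # decay rate (sets the units)

def rhs(rho, omega_R):
    H = 0.5 * omega_R * (sm + sp)               # Rabi drive term only
    drho = -1j * (H @ rho - rho @ H)
    drho += gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))
    return drho

def evolve(rho, omega_R, t, dt=1e-3):
    """Fixed-step RK4 integration of the master equation."""
    for _ in range(int(round(t / dt))):
        k1 = rhs(rho, omega_R)
        k2 = rhs(rho + 0.5 * dt * k1, omega_R)
        k3 = rhs(rho + 0.5 * dt * k2, omega_R)
        k4 = rhs(rho + dt * k3, omega_R)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

rho = np.diag([0.0, 1.0]).astype(complex)       # start in the excited state
rho = evolve(rho, omega_R=5.0, t=2.0)           # "gate": drive on
rho = evolve(rho, omega_R=0.0, t=20.0)          # "ancilla prep": drive off
print(np.real(rho[0, 0]))                       # ground-state population -> ~1
```

The block only illustrates the structural point: turning omega_R on and off changes H(t) but leaves the Lindblad term alone, so fast-ish gates and cold-ancilla preparation coexist in one Markovian description.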

Let me comment on Sean Barrett’s comment:

First correction: the Hamiltonian should be constant over a timescale much longer than the inverse of the typical Bohr frequency. The notion of \tau_R is not needed in the WCL, in contrast to the popular belief in the literature on Markovian approximations. One can find (for the same system, atom + e.m. field) in the (classic) literature at least three candidates for \tau_R: ~ 1/kT, 1/(atomic frequency), 1/(cut-off frequency).

> However, this argument would seem to exclude the commonly occurring case where there are two widely differing frequency scales in the system Hamiltonian. This occurs, for example, in the case of a two level atom driven by a laser.

This is a very good point! (Let’s use the Carmichael text.) It would be very instructive to perform the WCL to the very end without simply omitting the “irrelevant terms” in Eqs. (2.94a,b). Then, as Carmichael admits, the Lindblad generator is not given by Eq. (2.96) but will contain two additional Lindbladians OF THE SAME ORDER. The arguments around Eq. (2.95) justifying the neglect of the “irrelevant terms” are not correct!

Why does nobody use this “right” generator? Probably because in experiments we observe these famous three frequencies, and the derivation of the spectral line widths from the “atom + e.m. field” model is in any case irrelevant, because Doppler and pressure broadening are much more important than the radiation damping. So we can put almost anything in for the dissipative part and tune the decay rate to experiments.

Now the problem of the Markovian approximation. The two additional Lindbladians obtained from a proper WCL derivation come from averaging over times longer than 1/(Rabi frequency), and this contradicts the “fast gates” assumption. Hence for fast gates the additional Lindbladians are replaced by some non-Markovian terms.

Reply to Dave’s last comment, with many comments from correspondence with Robert Alicki inserted:

(Dave, there must be a way to number these comments automatically and make referencing easier…)

I actually think we’re right on topic. We’re trying to clarify why one should care about the thermodynamic consistency of Markovian master equations.

The “KMS assumption” is in fact not an assumption. It is a result, which must be satisfied by all quantum systems at thermal equilibrium (for a derivation see the Alicki & Lendi book, pp.90-91).

Now before you say “there he goes again with thermal equilibrium, and a QC is not at thermal equilibrium”, let me stress again that the point is that a quantum computer (of the type considered in standard fault tolerance theory) left to its own devices, coupled to a heat bath, *will* reach thermal equilibrium. In this context the statement “But I don’t think carrying out fault-tolerant quantum error correction exactly qualifies as leaving the computer alone” points to a possible misunderstanding: the thermodynamic KMS condition is a requirement of consistency *in the absence* of intervention (i.e., QC left alone in the presence of a heat bath). Namely, a Markovian master equation without external controls is subject to the KMS condition if one wants it to correctly describe the return to equilibrium of a system coupled to a heat bath.

But of course, one is interested also in the case with a driving Hamiltonian present. In the SCL the situation is very simple: *no matter what the driving Hamiltonian is (except physically impossible infinite-strength ones), in the SCL the system relaxes to a maximally mixed state*.

More specifically, to obtain the SCL we need a (singular) bath which satisfies the KMS condition at infinite T. The thus-obtained semigroup can be combined with an arbitrary time-dependent driving Hamiltonian for the system S. Then the state of S relaxes to a maximally mixed state. This relaxation, in the SCL, is generically exponential. According to the (unproven) Lemma 7 in the Aharonov et al. paper (quant-ph/9611028), such exponential decay kills the quantum computational speedup.
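The SCL claim is easy to visualize with a toy model (my own sketch, not from the paper: a single qubit with an infinite-temperature depolarizing generator standing in for the singular-coupling bath). Whatever Hermitian driving Hamiltonian is applied, the state still relaxes exponentially to the maximally mixed state I/2:

```python
import numpy as np

# Toy SCL-type evolution: drho/dt = -i[H, rho] + gamma * (I/2 - rho).
# The dissipator is the infinite-temperature (depolarizing) one, so its
# fixed point I/2 is untouched by any Hamiltonian term: [H, I/2] = 0.
I = np.eye(2, dtype=complex)
H = np.array([[0.3, 1.2 - 0.7j],
              [1.2 + 0.7j, -0.3]])              # arbitrary Hermitian "drive"
gamma = 1.0                                     # relaxation rate

def evolve(rho, t, dt=1e-3):
    """Simple Euler integration; adequate here since I/2 is an exact fixed point."""
    for _ in range(int(round(t / dt))):
        drho = -1j * (H @ rho - rho @ H) + gamma * (I / 2 - rho)
        rho = rho + dt * drho
    return rho

rho = np.diag([1.0, 0.0]).astype(complex)       # start in a pure state
rho = evolve(rho, t=20.0)
print(np.round(np.real(rho), 6))                # -> essentially I/2
```

Different choices of H change the transient but not the fixed point, which is exactly the “no matter what the driving Hamiltonian is” statement (the paper’s SCL dissipator is richer than this toy one, but the fixed-point logic is the same).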

These considerations support the conclusion presented in our paper, that one cannot use the SCL for the whole system = register + ancillas, if one would like to perform fault tolerant quantum error correction (FTQEC).

What does this mean? Not that FTQEC is impossible. Just that it seems to be impossible in the SCL version of the Markovian master equation. (But that knocks down one of the two possibilities for mathematically consistent derivations of the master equation.)

What is the timescale for this exponential relaxation? If we take a dissipative Lindblad generator only, obtained from the individual coupling of any qubit to an independent (part of the) bath, then such a generator is a sum of individual single-qubit generators. The spectral gap (such a generator is self-adjoint in Hilbert-Schmidt space) of the sum is equal to the spectral gap of the 1-qubit generator. So the relaxation time does not depend on the number of qubits in the system. Adding an arbitrary Hamiltonian makes the spectral analysis complicated but probably feasible. The Aharonov et al. paper (quant-ph/9611028) uses the relative entropy (with respect to a maximally mixed state) as a kind of Lyapunov function to show this exponential decay. (But unfortunately the crucial Lemma 7 is not proven and no references are given.)
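The spectral-gap statement can be checked numerically in a minimal example (my own sketch using a pure-dephasing generator L(rho) = gamma(Z rho Z - rho), not a generator from the paper): the gap of the sum of independent single-qubit generators equals the single-qubit gap, so the slowest relaxation rate does not shrink with qubit number.

```python
import numpy as np

# Vectorize rho column-wise, so vec(A rho B) = (B^T kron A) vec(rho);
# then L(rho) = gamma*(Z rho Z - rho) becomes gamma*(kron(Z^T, Z) - I).
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
gamma = 0.7

def dephasing_superop(Zop):
    d = Zop.shape[0]
    return gamma * (np.kron(Zop.T, Zop) - np.eye(d * d))

def gap(L):
    """Smallest nonzero decay rate |Re(eigenvalue)| of the generator."""
    rates = np.abs(np.linalg.eigvals(L).real)
    return rates[rates > 1e-9].min()

L1 = dephasing_superop(Z)                           # one qubit: gap = 2*gamma
Za, Zb = np.kron(Z, I2), np.kron(I2, Z)             # Z on each qubit of a pair
L2 = dephasing_superop(Za) + dephasing_superop(Zb)  # independent baths
print(gap(L1), gap(L2))                             # equal (both 2*gamma)
```

Adding a Hamiltonian term would complicate the spectrum, as the comment notes, but for the purely dissipative sum the gap, and hence the relaxation time, is manifestly independent of the number of qubits.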

What about the (physically more interesting and relevant) WCL? KMS is a condition imposed on the heat bath R, which is assumed to be a large system such that, in the weak coupling regime, all manipulations performed on our “small system” S essentially do not change the state of R. This is a consistent assumption because one of the properties of KMS states is their stability with respect to local perturbations.

If the Hamiltonian of S is constant then S relaxes to a Gibbs (KMS) state with the same temperature as R (and detailed balance is automatically satisfied). If not (the case of a driving Hamiltonian), then generally we have a rather complicated non-Markovian evolution, except in the adiabatic case when S tries to approach the temporal Gibbs state \sim \exp (-H_S(t)/T).


Daniel: well, I wish I had threaded comments. Maybe I need to look into whether I can get my blog software to thread the comments.

Well there is much that I don’t understand. First of all, you now seem to be making statements about the system, not the bath, being in thermal equilibrium. Again, I don’t understand why this must be true. It seems to me you are assuming badness and then showing that when the conditions under which badness happens are enforced, you get badness!

But I do think this is perhaps missing the main point of your paper.

Perhaps here is something interesting. If you consider the ancillas as part of the bath, then because the fault-tolerant quantum computing constructions use the ancillas in a very non-Markovian manner, one of course needs to discuss non-Markovian processes. Whether one needs to consider non-Markovian noise for the coupling of the ancillas to the environment and the coupling of the system to the environment is a separate question. I’m pretty confident (because of experiment, not because of theory) that we CAN cool systems down. Once we’ve cooled them down, we consider them part of the quantum computer and bring them into the computer. Now we can have a hot bath, perform fast gates between the computer and ancillas, and we are happy (I guess I’m just back to my original argument.) Notice, however, that if we had looked at this process as noise just from the “computer” perspective, this whole noise process would look non-Markovian! So my argument is consistent with yours; it’s just that I consider the non-Markovian coupling of the computer to the ancillas to be nothing surprising, and not essential to the Markovian fault-tolerant construction, where the noise on the ancillas while they are being cooled, on the computer alone, and on the computer combined with ancillas will all be in a Markovian limit.

Dave:

Here is an interesting quotation, pointed out to me by a concerned reader. It is from “Laser Cooling and Trapping” by Metcalf and van der Straten. In the quotation the authors point out that laser-cooled ions quickly reach a “stationary state [that] is not an equilibrium state and thus there is no thermodynamically definable temperature.”

It looks like the quantum computer to which you guys dedicated piles of talks and comments had problems with the water supply needed to cool it during computations. Also, one cannot predict the age and entropy production of somebody’s grandmother based on the temperature of her right foot in the middle of summer.

What a horrible thing to say, Bor.

BTW, there is a new version of this paper out, and I still think it has problems. I’ll be posting something about it in the next week or so, so I hope you look at the post and don’t go away believing that quantum computers won’t work (and I’d especially hope you don’t just listen to me, but go and read the paper and understand what the critiques are… they don’t involve my grandma or yours at all 😉 )