Oh, my brain is sore. Why, oh why do I get suckered into reading things like quant-ph/0610117? Now I could go on and on about this paper, but instead I thought I’d cut and paste my favorite parts. The parts that didn’t make me want to send my head straight through my monitor. Mother always said if you can’t say anything nice about a paper, cut and paste the better parts and make funny statements about them. Call it laugh therapy, if you will.

Okay, let’s begin the therapy. This part is funny. It gives what I call an “argument by ignorance”:

On the other hand, the heavy machinery of the theoretical quantum computation with its specific terminology, lemmas, etc, is not readily accessible to most physicists, including myself.

So you want to criticize fault-tolerant quantum computation, but you readily admit that you do not understand it? Ha! That just cracks me up. Is this sort of like the arguments that “math is useless” because “I haven’t used math in years?” Oh, and having read the literature, I’m pretty sure that *even* a physicist should be able to understand it. I mean, I’m a physicist and I understand it…and I’m not even a string theorist.

Another good part of the paper is reference [19]:

The future quantum engineer is a mythical personage, who will finally achieve factorization of numbers like [tex]$10^{260}$[/tex].

Now, if I hadn’t just read the statement of ignorance which opens up this paper, I might assume that this is an ironic statement. But just having stated that things like “lemmas” are too complicated, I just can’t make that assumption. And when you say that factoring “requires” exponential time, I can’t assume that you know pretty much anything about computer science or discrete math. So I’d like to say that even today there are engineers who can factorize these numbers. Indeed [tex]$10^{260}=2^{260} \times 5^{260}$[/tex].
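The engineer’s factorization is a one-liner, thanks to arbitrary-precision integers (a trivial check, just to drive the joke home):

```python
# 10 = 2 * 5, so 10**260 factors trivially as 2**260 * 5**260.
n = 10**260
assert n == 2**260 * 5**260  # exact arithmetic: Python ints are unbounded
print(len(str(n)))  # n has 261 digits: a 1 followed by 260 zeros
```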

Here is another good one:

Alicki [21] has made a mathematical analysis of the consequences of finite gate duration. I am not in a position to check his math, but I like his result: the fidelity exponentially decreases in time.

Sweet. That’s an argument by ignorance followed by an argument by “I like it!”

A Fox News bifecta!

Argh. #@%@!! Yep, that’s about all I can say.

> Nonetheless, if you know you don’t have the knowledge, how can you even comment?

Wisest is he who knows he does not know – Plato.

Oh…My…God….

I will readily admit I have, once or twice, spit out a thoroughly naïve and/or simplistic paper thanks to a lack of knowledge. I would be surprised if everyone hasn’t done it at least once. Nonetheless, if you know you don’t have the knowledge, how can you even comment???????? It’s like the idiocy that seems to regularly haunt the medical profession (see my extended rant on that subject). The attitude seems to be, in general, “if I don’t understand it, it must be wrong or worthless!”

There’s steam coming out of my ears…

Dave, I don’t think you’ve grasped the true substance of Dyakonov’s critique. 10^260 might be factorable into 2^260 * 5^260 in theory — i.e., in some mathematician’s fantasyland — but what are its real, physical factors?

I also read this paper. When I read the comments on the DFS:

I know the author did nothing but guess when preparing the paper.

From the abstract: Based on a talk given at the Future Trends in Microelectronics workshop, Crete, June 2006

Clearly it was presented by a Cretin…

(sorry, couldn’t resist)

Okay, I can’t help chiming in with my own favorite quotation:

…we must be very careful and reluctant in accepting theorems… Whenever there is a complicated issue, whether in many-particle physics, climatology, or economics, one can be almost certain that no theorem will be applicable and/or relevant…

Right. Off the top of my head, “You can’t communicate faster than light [across spacelike intervals].” Possibly “Computer programs can’t solve the Halting problem [or insert your favorite undecidable here].” Or “Perpetual motion machines are impossible.” Thank goodness for all the careful thinkers who, as they roam the streets of Berkeley, are reluctant to accept these theorems.

Now, the paper has a relevant caveat, as it refers to theorems “provided by mathematicians”, and most of the above was done by physicists. Whereas fault-tolerance was, of course, developed entirely by mathematicians. Like that John Preskill guy.

Returning to some seriousness, I read a Slashdot thread on spam the other day. Somebody posted a delightful fill-in-the-box form, which began “Your proposed solution to the spam problem will not work for the following basic reason:” after which there was a list of about 15 statements with check boxes. The statements were along the lines of “It violates the Constitution,” “It would require infinite computing power,” and “It would prevent all email.” The form had three or four similar sections, including “The crucial thing you have overlooked is…” and “This solution has previously been proposed by…”

I’d love to have such a boilerplate form for evaluating these periodic “critiques” of QC. It would be awfully satisfying to fill out, and might actually have some effect.

Speaking of Slashdot, when did the arxiv evolve into it?

I enjoyed some of the funny parts, too. But actually it is nice, for those of us who may have come to take the theory of quantum fault tolerance for granted, to be reminded of how truly remarkable and marvelous it is. This paper does not lay a glove on the theory. Even so, let’s be careful not to be too smug. We sure have a long way to go toward turning the theory into practice.

You should commend him for finding out about quantum fault-tolerant computations, because as of 2001 he seemed to be unaware of it. Not that this would stop him from commenting on quantum computing even then.

Well, you’ve got to admit that the abstract is attention-getting. The only reason it caught my eye is that Dyakonov (and I’m pretty sure it’s the same guy) is a fairly famous condensed matter physicist. He has a spin-relaxation mechanism named after him. I agree, upon reading the actual paper, that he seems waaaaaaaay out of line with his criticisms on this topic.

Dave,

Dyakonov has written another polemic against quantum computing at http://arxiv.org/abs/cond-mat/0110326

He tries to denigrate the field by claiming he is “frustrated with this state of affairs”…

PS. You gave a great show at QIPC’06 in London!!

I hate to be the party pooper, but I’d like to suggest a more serious approach to Dyakonov’s paper. First of all, a critique of fault tolerance is a healthy thing: http://dabacon.org/pontiff/?p=959, http://dabacon.org/pontiff/?p=1028, http://link.aps.org/abstract/PRA/v73/e052311 ;). But more to the point, Dyakonov is a well respected condensed-matter physicist (the Dyakonov-Perel mechanism of spin relaxation is named after him) and he is probably representative of a community of quantum computing sceptics (CQCS), most of whom don’t have the time or interest to read the error correction literature in depth and hence must form opinions from a position of relative ignorance (don’t we all about something). Moreover, senior people like Dyakonov and others in the CQCS are in a position to make decisions about funding, hiring and other not so minor matters… So rather than ridiculing him, how about we write a serious reply to his paper? The reply could be posted on quant-ph and cond-mat and would show the CQCS that we are capable of handling their criticism professionally. I’d be happy to see every single technical point in his paper addressed in detail and refuted. This could be done by people adding their comments to an open-source online document. I’m not sure what tools are available to write such a joint paper online, but suggest Dave as the Editor-in-Chief! (Upon checking with Dave, he has agreed to set up a wiki if there is enough interest.)

To start this off, here is a piece addressing his critique of decoherence-free subspaces.

“Dyakonov writes: `Although it is not difficult to construct artificial models with some special symmetries, my guess is that in any real situation the Lindblad operators do not have common eigenstates at all. Obviously, the simplest way to fight noise is to suppose that at least something is ideal (noiseless). Unfortunately, this is not what happens in the real world.’

Although Dyakonov is right that the condition that all the Lindblad operators have common eigenstates is unlikely to be satisfied in real systems, he misses the point. Consider the case of point group symmetry in molecules. No real molecule at room temperature has perfect point group symmetry, since its nuclei are constantly in relative motion (i.e., the molecule is not a rigid rotor). Does this mean that the concept of symmetry applied to molecules is useless? Of course not, since it is an empirical fact that symmetry explains much of molecular spectroscopy. The reason is that for small relative nuclear displacements, point group symmetry is a good starting point for assignment of molecular spectra. Effects of non-rigidity modify the spectrum, but only in the limit of large relative nuclear displacement is point group symmetry lost and we must resort to other methods for assigning spectra.

The same principle holds for decoherence-free subspaces. A sound principle of quantum computer engineering is to build a device whose interaction with the environment has a high degree of symmetry. Of course in reality this symmetry will be broken, but if it is broken only perturbatively then the DFS concept will be useful. First, DFSs are robust with respect to symmetry-breaking perturbations [D. Bacon et al., Phys. Rev. A 60, 1944 (1999)]. Second, it is possible to use dynamical decoupling techniques to “fix” a weakly broken symmetry, by re-symmetrizing the interaction [e.g., L.-A. Wu and D.A. Lidar, Phys. Rev. Lett. 88, 207902 (2002)]. Under these conditions of weak symmetry breaking it makes sense to use a hybrid strategy in which a DFS encoding is used at the lowest level, dynamical decoupling pulses are used to reduce as much as possible the symmetry breaking perturbations, and a top layer of fault-tolerant quantum error correction deals with all remaining errors. Also noteworthy is the fact that it is now well appreciated that the entire structure of quantum error correcting codes can be embedded into the mathematical structure of noiseless subsystems, which are a generalization of decoherence-free subspaces [e.g., E. Knill, Phys. Rev. A 74, 042301 (2006)].”
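The symmetry argument above is easy to check numerically. Here is a minimal toy sketch (my own construction, not from Dyakonov’s paper or the references above), assuming the textbook collective-dephasing model: two qubits coupled identically to the bath through the single Lindblad operator L = Z⊗1 + 1⊗Z, for which span{|01⟩, |10⟩} is a decoherence-free subspace because L annihilates it:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
# Collective dephasing: both qubits see the same bath, L = Z1 + Z2
L = np.kron(Z, I2) + np.kron(I2, Z)

def dissipator(rho):
    # Lindblad dissipator D(rho) = L rho L^dag - (1/2){L^dag L, rho}
    # (L is real symmetric here, so L^dag = L.T = L)
    return L @ rho @ L.T - 0.5 * (L.T @ L @ rho + rho @ L.T @ L)

e = np.eye(4)
dfs = (e[1] + e[2]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2): in the DFS
bad = (e[0] + e[3]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2): not in the DFS

print(np.allclose(dissipator(np.outer(dfs, dfs)), 0))  # True: protected
print(np.allclose(dissipator(np.outer(bad, bad)), 0))  # False: decoheres
```

Breaking the symmetry (e.g. replacing L by Z⊗1 + 0.9·1⊗Z) makes the first dissipator small but nonzero, which is exactly the perturbative regime where the hybrid DFS-plus-error-correction strategy applies.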

Here is another quote which requires a response. On p. 6, top of col. 2, Dyakonov writes: “Related to this, there is another persistent misunderstanding of quantum mechanics, which plagues the quantum error correction literature.” He then proceeds to claim that we don’t understand the fact that measuring a qubit in the computational basis generally doesn’t commute with applying a unitary operation to it. Perhaps someone else would like to take the initiative and explain why no such misunderstanding exists in the quantum error correction literature.

There are many more statements in the Dyakonov paper which amount to attacks that can easily be refuted, but I will leave it at this.

Perhaps we can agree with him on this point:

It is absolutely incredible, that by applying external fields, which cannot be calibrated perfectly, doing imperfect measurements, and using converging sequences of “fault-tolerant”, but imperfect, gates from the universal set, one can continuously repair this wavefunction, protecting it from the random drift of its [tex]$10^{300}$[/tex] amplitudes, and moreover make these amplitudes change in a precise and regular manner needed for quantum computations. And all of this on a time scale greatly exceeding the typical relaxation time of a single qubit.

I posted some thoughts on Dyakonov’s preprint on Scott Aaronson’s blog. In agreement with some people (but unlike most people) I thought there was quite a bit of interesting physics and geometry implicit in Dyakonov’s (regrettably polemical) preprint.

Dave’s making much of John’s comment “Dyakonov’s paper does not lay a glove on the theory.” But this is just an appeal to authority (albeit a stupendously well-respected authority).

As I understand the point of Dyakonov’s article (assisted by Dan Lidar’s excellent review of decoherence-free subspaces), the contest is more like the game of wits between Westley and Vizzini the evil Sicilian in The Princess Bride. And the opening arguments of both sides seem quite strong to me:

Skeptic: Let’s test Shor-coded single-qubit error correction, in an independent Markovian noise model.

Believer: Fine by me!

Skeptic: Aha! Already the stipulation “independent” has you in trouble. Because there’s no decoherence-free subspace.

Believer: No problem! For the Shor code, the expectation value of all 3×9=27 single-qubit Pauli operators vanishes identically. So the code subspace is approximately decoherence-free, to leading order in perturbation theory.

Skeptic: (sounding exactly like Vizzini the evil Sicilian) You’d like to think that, wouldn’t you? But you’ve given everything away! You fell victim to one of the classic blunders! The most famous is never get involved in a land war in Asia, but only slightly less well-known is this: never compute Markovian noise matrix elements only to leading order — the Ito rules demand that second-order terms be kept too.

—–
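The Believer’s claim is easy to verify directly. A sketch (my own numerics, assuming the standard form |0_L⟩ = [(|000⟩+|111⟩)/√2]^⊗3 of the Shor code): build the logical states and check that all 3×9 = 27 single-qubit Pauli expectation values vanish on the codespace.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def pauli_at(P, i, n=9):
    # Embed a single-qubit Pauli P at position i among n qubits
    ops = [I2] * n
    ops[i] = P
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# Shor-code logical states: |0_L> = [(|000>+|111>)/sqrt(2)]^{x3},
#                           |1_L> = [(|000>-|111>)/sqrt(2)]^{x3}
ghz_p = np.zeros(8, complex); ghz_p[0] = ghz_p[7] = 1 / np.sqrt(2)
ghz_m = np.zeros(8, complex); ghz_m[0], ghz_m[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)
zero_L = np.kron(np.kron(ghz_p, ghz_p), ghz_p)
one_L = np.kron(np.kron(ghz_m, ghz_m), ghz_m)

# All 3*9 = 27 single-qubit Pauli expectations vanish on the codespace
for psi in (zero_L, one_L, (zero_L + one_L) / np.sqrt(2)):
    for P in (X, Y, Z):
        for i in range(9):
            assert abs(psi.conj() @ pauli_at(P, i) @ psi) < 1e-12
print("all 27 single-qubit Pauli expectations vanish")
```

Of course, this only confirms the leading-order statement; it says nothing about the Skeptic’s second-order Ito terms.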

My own view … both parties should drink the iocane-doped wine of a detailed numerical calculation, accurate to second order, and see who falls over. Algebraic understanding can come later.

Dave, I’d be sincerely interested to know more about your simulations, since I plan to run similar ones over the holidays.

Which means, I’m already thinking about how to structure the code, what measures of performance to use, etc.

E.g., I’ve already decided to use quantum trajectory simulations (the preferred idiom for engineering).

And in these simulations, the Shor code error correction will be applied iteratively.

Now, AFAICT, the error correction will not suffice to keep the wave function from drifting and diffusing farther and farther out of the Shor codespace, in consequence of precisely those second-order Ito terms.

But hopefully, the error correction will slow down this out-of-codespace drift and diffusion … the key question then is, how MUCH does it slow it down? Being a weak algebraist, I hope to observe what the code does, then explain it ex post facto.

(All of the above is entirely aside from other key issues, that are connected with implementing projection operators as weak Kraus operators, while taking realistic account of Stark shifts and finite detector efficiencies.)
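For concreteness, the trajectory idiom can be sketched in a few lines. This is a deliberately tiny toy (a single qubit under amplitude damping with the standard jump unravelling — my choice of model, not necessarily the setup being planned here); averaging many trajectories should recover the master-equation decay exp(−γt) of the excited-state population:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, dt, n_steps, n_traj = 1.0, 0.002, 1000, 300
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], complex)  # lowering operator
H_eff = -0.5j * (L.conj().T @ L)  # non-Hermitian effective Hamiltonian (H = 0)

pop = np.zeros(n_steps)  # ensemble-averaged excited-state population
for _ in range(n_traj):
    psi = np.array([0, 1], complex)  # each trajectory starts in |1>
    for t in range(n_steps):
        pop[t] += abs(psi[1]) ** 2
        if rng.random() < gamma * abs(psi[1]) ** 2 * dt:
            psi = L @ psi  # quantum jump: collapse to the ground state
        else:
            psi = psi - 1j * dt * (H_eff @ psi)  # smooth no-jump evolution
        psi /= np.linalg.norm(psi)
pop /= n_traj

# The trajectory average should track the master-equation result exp(-gamma*t)
t_grid = dt * np.arange(n_steps)
print(np.max(np.abs(pop - np.exp(-gamma * t_grid))))  # small, ~1/sqrt(n_traj)
```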

And finally, how the heck do I get good paragraph breaks on this blog !!??

And as an addendum to the previous post, I tend to think of quantum simulation in geometric and cryptographic terms.

From this point of view, it is natural (and rigorously general) to regard “Alice” as receiving information from the noise processes, which are realized as covert measurement channels owned by Alice. Similarly, “Bob” is receiving information from the error correction measurements.

From this point of view, error correction is a contest between Alice and Bob. Bob is trying to keep the qubit confined to the Shor codespace, while Alice is trying to drag the wave function out of the Shor codespace. We can think of Bob as trying to dominate Alice via the quantum Zeno effect.

Bob’s main advantage is this: he is allowed to use feedback control to steer the wavefunction toward the codespace. So what Bob is doing is actually quite a bit stronger than the usual quantum Zeno effect.

Alice’s main advantage is this: there is no quantum Zeno effect for transitions induced by white noise (in consequence of those second-order Ito terms).
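Alice’s advantage can be made concrete with a textbook back-of-the-envelope calculation (my own illustration, not from the thread): under coherent Rabi driving at frequency Ω, projective measurement every τ gives survival probability [cos²(Ωτ/2)]^(T/τ), which tends to 1 as τ → 0; a white-noise transition rate Γ instead gives survival (1 − Γτ)^(T/τ) → exp(−ΓT), independent of τ, so there is nothing for the Zeno effect to freeze.

```python
import numpy as np

Omega, T = 1.0, 10.0  # Rabi frequency and total evolution time

def survival(tau):
    # Probability the qubit is still in |0> after measuring every tau:
    # each interval contributes cos^2(Omega*tau/2) under coherent driving
    n = int(round(T / tau))
    return np.cos(Omega * tau / 2) ** (2 * n)

for tau in (1.0, 0.1, 0.01):
    print(tau, survival(tau))
# Survival -> 1 as tau -> 0: frequent measurement freezes coherent transitions.
# White noise instead leaks probability ~ Gamma*tau per interval, so survival
# ~ (1 - Gamma*tau)**(T/tau) -> exp(-Gamma*T): no dependence on tau, no Zeno.
```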

Speaking personally, it surely will be darn fun to watch Alice and Bob battling it out!

And then, as one final note, let me say what I presently think these trajectory simulations will show (but I want to do it, to make sure!).

In the first round, using ideal projective measurements, I expect that Bob’s feedback control will dominate Alice’s covert measurement; the “believer” will be validated; the “skeptic” will fall over with surprise; Dyakonov will be amazed that the orientation of the spin is protected from random drift (not perfectly, but amazingly well). This will be very satisfying for the “believers”.

In the second round, using nonideal projective measurements, everyone will come to better appreciate some of the extraordinarily difficult engineering challenges, relating to (e.g.) finite detector efficiencies and Stark shifts, that stand in the way of approximating ideal projective measurements well enough to do practical error correction. This will be very satisfying for the “skeptics”.

So both skeptics and believers will be happy? Duh, iff there’s a massive increase in jobs & funding for science, mathematics, and engineering in general! `Cuz that’s the only thing that makes everyone happy. This means we must ship products that work.

Thanks Dave! And don’t forget Charlene’s http://www.arxiv.org/abs/quant-ph/0302006 .

I’ve talked with Charlene a couple of times … she’s very smart!

If we could create a buncha jobs for talented young people like her, that would be great. But job-creation is even tougher & more challenging than quantum error correction, it seems to me.

Of course you keep second order terms to derive a Markovian master equation. This doesn’t mean that the important terms in such master equations aren’t independent errors.

We already have algebraic understanding, and better yet this understanding is physical (at least for those of us who come from the physics side of the equation). I’ve done master equation computer simulations of error correction. What I learned was that the theory works. So I stopped doing that and went on to the real problem of actually figuring out how to engineer a quantum computer.

(Although note that the simulations I did were with Markovian master equations plus unitary evolution. What critics like Alicki et al. would say is that the master equation I used was wrong. However, I think one of the main points of contention coming out of that episode was that to a good approximation these equations would be correct (at least in some, but not all, physical systems). Further, recent non-Markovian results would seem to cut even further holes into this argument.)
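For readers who haven’t done such simulations, here is a minimal sketch of the idiom (a single dephasing qubit with forward-Euler integration — an illustration of the method only, not Dave’s actual code): the coherence decays as exp(−2γt), exactly the exponentially-decaying-fidelity, independent-error behavior that the fault-tolerance constructions are built to handle.

```python
import numpy as np

gamma, dt, n_steps = 0.5, 0.001, 2000
Z = np.diag([1.0, -1.0])

rho = 0.5 * np.ones((2, 2), dtype=complex)  # start in |+><+|: maximal coherence

coh = np.zeros(n_steps)
for k in range(n_steps):
    coh[k] = abs(rho[0, 1])
    # Forward-Euler step of the dephasing Lindbladian:
    #   drho/dt = gamma * (Z rho Z - rho)
    rho = rho + dt * gamma * (Z @ rho @ Z - rho)

t = dt * np.arange(n_steps)
# The coherence decays as 0.5 * exp(-2*gamma*t); Euler error here is tiny
print(np.max(np.abs(coh - 0.5 * np.exp(-2 * gamma * t))))
```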

Related to this John, you must surely enjoy http://arxiv.org/abs/quant-ph/0110111 as well as http://arxiv.org/abs/quant-ph/0402017 and as well as Charlene Ahn’s thesis “Extending Quantum Error Correction: New Continuous Measurement Protocols and Improved Fault-Tolerant Overhead” (which I can’t find online right now.)

Err, John Sidles that is, not John Preskill who was Charlene’s advisor!

i think laugh therapy really works but people have to see that in ur website cause honestly u didnt make laugh therapy seem so important to people although its really good 😉

Dyakonov made tenure and can say whatever he wants.

in fact, feeding his name into Google throws up an abstract for a 2002 critique of the composite fermion picture of the fractional quantum Hall effect, another `theory’ (which resulted in Nobel prizes for the discoverers in that instance)

I think Daniel Lidar’s assessment is correct, the criticisms need to be answered professionally with an accessible review.