A Quantum Bogosity Updated

One of the coauthors of the paper which I claimed was shoddy has written a comment on the original post. Which merits more commenting! But why comment in the comment section when you can write a whole blog post in reply!

The paper in question is 0804.3076, and the commenter is George Viamontes:

Dave, this is a complete mis-characterization of the paper. Before I start the rebuttal, I’ll add the disclaimer that I am the second author of the paper, and would be more than happy to clarify this work to anyone.

Sweet! Continuing:

We are absolutely well aware of the threshold theorem and we even cite one of Preskill’s post-1996 papers that directly discusses the threshold result of Knill and Laflamme.

Aware, yes, but did you read Knill and Laflamme and Zurek and Aharonov and Kitaev and Preskill and Gottesman and Aliferis and Reichardt and Shor and DiVincenzo, and all of the other papers which construct the theorem? The paper clearly disregards how these results work. But putting that aside:

The problem with the threshold theorem for QECCs is that it only applies to errors that are *orthogonal* to the code basis. This is the whole point of all of the QECCs I’ve seen based on codewords, and it is explained quite clearly in Preskill’s paper.

Of course QECC does not perfectly fix all errors. This isn’t the point of error correction. The point of error correction is that you can reduce the error rate in a way that scales exponentially better as you use larger and larger codes. This is also true for classical codes, of course: if I independently flip each bit of a three-bit repetition code with probability p, even given perfect encoding and decoding, there is a probability p cubed error which acts on the logical code space (and, worse, a probability p squared error which gets miscorrected). The whole point of error correction is that you can use good codes, or concatenate, and if your original error rate is p for a bare qubit or bit, then it is possible to make the failure rate scale like some q less than one raised to a power of n, the number of qubits. If you use concatenation this is easy to see: by encoding twice, for example, your error rate goes (up to constants) not from p to p squared, but from p to p to the fourth power. So for the errors which concern you, the point is that if you concatenate your seven qubit code, or use better codes, and your over-rotation is below a certain threshold, the error correction failure rate will be exponentially suppressed in the number of qubits.
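(A toy back-of-the-envelope, in case the scaling isn't obvious. This is my sketch, not anything from the paper under discussion: assume each level of concatenation fails only when roughly two of its sub-blocks fail, so the logical error rate obeys p -> c*p^2 for some combinatorial constant c set by the code and circuits. The c = 100 below is invented purely for illustration; in this cartoon the threshold is 1/c.)

```python
# Toy model of concatenated error correction (an illustration, not the paper's
# model): each level fails only when ~2 of its sub-blocks fail, so the logical
# error rate obeys p_{k+1} ~ c * p_k**2.  The constant c is invented here;
# real values depend on the code and the fault-tolerant circuits.
c = 100.0                 # hypothetical combinatorial constant
for p in (1e-2, 1e-3):    # bare (physical) error rates; threshold is 1/c
    rate = p
    print(f"bare error rate p = {p:g}")
    for level in range(1, 5):
        rate = c * rate ** 2
        print(f"  level {level}: logical error rate ~ {rate:.1e}")
```

Right at the threshold nothing improves; just below it, every extra level squares the suppression, which is the whole game.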

It seems that you didn’t read the first three formulas at the top of page 4 of our paper or the description of the error model we use on pages 11 and 12.

Actually, I did, BTW.

We model gate imprecision, which adds a random epsilon of rotation error to each gate (in lay terms you add a small random epsilon error value to the sin/cos parameters of the matrix representation of the operators). Forgetting simulations for the moment, what happens with the gate imprecision model is that two types of error are created, and only one of these is corrected by a QECC. Indeed, some non-zero amplitudes that are associated with invalid codeword states will manifest when applying an imprecise gate. These errors will be caught by a QECC since this portion of the state is orthogonal to the codeword basis. However, another error occurs which over- or under-rotates the part of the state associated with valid codewords. This is exactly what is shown in the first three formulas on page 4 of our paper. These errors will not be detected by a QECC, and these are precisely the errors we are talking about.

You seem to think that the point of QECC is perfect correction. This is obviously impossible (classically as well as quantumly). The entire point of error correction is to reduce the rate of errors in a manner which scales nicely. See above. This is a basic and essential point of nearly all ideas for fault-tolerant quantum computing. Indeed it is a basic idea of error correction, period.

For example, suppose we say that |L0> and |L1> are our logical codewords for 0 and 1 (let’s say they are 7-qubit CSS code registers initialized appropriately). Further suppose we start off with a valid error free state a|L0> + b|L1>. Now suppose we transversally apply a logical X gate to this state, well then we’d expect to get a|L1> + b|L0>. However, if there is gate imprecision error in each of the physical X operators applied transversally (exact precision is physically impossible), then you will get something like c|L1> + d|L0> + f|E>, where |E> is an invalid codeword basis vector. Certainly f|E> will be eliminated by a QECC if the magnitude of that error is large enough. However, even after correcting this error, the amplitudes for the |L1> and |L0> portions of the state will be slightly off from a and b because part of the action of an imprecise gate is to under- or over-rotate those basis vectors.

Of course. The real question is how this uncorrectable error scales as you do something which nearly everyone does, like concatenate codes, or use codes designed to correct more errors.
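Here is a minimal numerical sketch of exactly that question for the smallest possible example (my toy, not the simulation from the paper): encode a qubit in the 3-qubit bit-flip code, apply a logical X transversally with every physical gate over-rotated by epsilon, do ideal syndrome measurement and recovery, and compare the logical infidelity to the same over-rotated gate acting on a bare qubit.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def rx(theta):
    """X rotation exp(-i*theta*X/2); theta = pi would be a perfect X gate."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

def x_on(k):
    """Pauli X acting on qubit k of 3."""
    return kron_all([X if j == k else I2 for j in range(3)])

# 3-qubit bit-flip code: |0_L> = |000>, |1_L> = |111>.
zero_L = np.zeros(8, dtype=complex); zero_L[0] = 1.0
one_L = np.zeros(8, dtype=complex);  one_L[7] = 1.0
P_code = np.outer(zero_L, zero_L.conj()) + np.outer(one_L, one_L.conj())

# Ideal recovery: project onto each syndrome subspace, undo the flagged flip.
recoveries = [(np.eye(8, dtype=complex), P_code)]                 # trivial syndrome
recoveries += [(x_on(k), x_on(k) @ P_code @ x_on(k)) for k in range(3)]

def encoded_infidelity(eps):
    """Transversal X with every gate over-rotated by eps, acting on |0_L>,
    followed by ideal syndrome measurement and correction; compare to |1_L>."""
    out = kron_all([rx(np.pi + eps)] * 3) @ zero_L
    fid = 0.0
    for correction, projector in recoveries:
        branch = correction @ (projector @ out)   # unnormalized branch state
        fid += abs(np.vdot(one_L, branch)) ** 2   # syndrome prob * branch fidelity
    return 1.0 - fid

def bare_infidelity(eps):
    """The same over-rotated X gate on a single, unencoded qubit."""
    psi = rx(np.pi + eps) @ np.array([1.0, 0.0], dtype=complex)
    return 1.0 - abs(psi[1]) ** 2

for eps in (0.2, 0.1, 0.05):
    print(f"eps = {eps:4.2f}:  bare ~ {bare_infidelity(eps):.1e},"
          f"  encoded+corrected ~ {encoded_infidelity(eps):.1e}")
```

Even this tiny, unconcatenated code turns a roughly quadratic-in-epsilon infidelity into a roughly quartic one; the leftover "non-orthogonal" part of the error behaves like p going to order p squared, exactly the story above, just for a coherent rather than stochastic fault.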

In other words, in the case of a logical X operator, this is a lot like applying “too much” or “too little” legitimate X rotation, which essentially amounts to a circuit that is fault-free according to a QECC but functionally slightly different from the real fault-free circuit with no under- or over-rotation. So yes, part of the error is eliminated, but not the part that affects the valid codeword basis vectors.

Again, no argument there. But you’re missing the point of error correction.

To address your point about our simulations, the diagrams you are referring to are merely there to show how the codewords are functionally created and how error correction is done. As noted in the text, we merely point out that regardless of how you want to implement the codeword initialization or syndrome measurement, all the physical operators involved will be imprecise.

Of course. But why do you need to point this out? This has been a staple of people studying quantum computing for over a decade now. The line in your paper in which you say that decoherence has been the main concern of the community is, in my opinion, a really really really misinformed statement.

More importantly, at the bottom of page 12, as we try to explain in the text, it’s really the transversal X gates we apply that carry the gate imprecision error. As X gates are transversal, these are being applied fault-tolerantly, albeit with operator imprecision on each X operator. We’ve run the simulation both ways: assuming a clean initialization of a logical 0 and even clean error correction machinery, and assuming faulty versions of each using the simple functional circuit, which indeed does not incorporate extra fault-tolerance. I think you are assuming we only did the latter, but both versions exhibit the same linear trend of increasing probability of error, where in both cases the probability of error is calculated by directly comparing to the amplitudes of fault-free operation.

Again, it is not surprising to find these errors. But your paper makes absolutely no mention of runs that don’t use the circuits you constructed. And again, this makes no difference to the fact that you completely ignore the point of how quantum error correction is used in the threshold theorem.

Indeed, we can be more clear about the fact that even with fault-free state initialization and fault-free syndrome measurement/recovery, the trend is the same. This is just a first draft posted on the arXiv after all, not a camera-ready copy.

Fine, but do realize that for many, posting on the arXiv is a de facto publication.

The results of merely using faulty transversal X gates and assuming everything else is error free are particularly telling because it seems to show that the under- and over-rotations on the valid codeword states do accumulate, and they are spread across all the physical qubits since they are not orthogonal to the codewords. This coupled with the fact that in *all* cases we provide pristine ancillary 0 qubits for each syndrome measurement shows that this experiment clearly gives the benefit of the doubt to the QECC being simulated.

Again: of course, even when quantum error correction is done correctly, it can never be perfect. The point of error correction is that it can lower error rates. This is a fundamental, and basic, observation that you have totally overlooked.

This work is also merely a first step in that we intend to simulate other circuits and other QECCs with many different parameters. The problem is that this subset of the errors not caught by a QECC seems difficult to quantify, which spurred us to propose simulation to analyze this problem.

Well now here you are completely ignoring the fact that these errors have been considered. Indeed, there are even beautiful papers where rigorous, provable thresholds follow for handling these errors (see for example Aliferis, Gottesman, and Preskill quant-ph/0703264 or the thesis of Aliferis quant-ph/0703230 for the latest and greatest). I do know that these papers are not the most accessible papers, but even a cursory reading of them would inform you that they are considering exactly the type of error which you wrongly claim dooms quantum computing.

In every case we expect the non-orthogonal errors to accumulate just as they do in this simulation even when assuming error-free QECC machinery. We are definitely open to constructive feedback about this topic, and we posted this paper with the intention of getting such feedback.

However the claim that this work is bogus in any way is clearly ridiculous, and no offense intended, but it seems to have stemmed from not reading through the draft very closely.

Dude, I’m tearing apart your paper on a blog. Of course I’m not offended. You’re the one who has every right to be offended. And indeed my first reading saw the errors you were considering (which have been previously considered), your circuit diagrams (which are not fault-tolerant and are highly misleading if you didn’t actually use them), and your conclusion (that logical operations with over-rotations still cause errors: of course they do, the point is how error correction reduces this as the number of qubits used in your error-correcting code grows), and knew immediately that you were missing the point of what had been previously done. I did focus on your not using fault-tolerant circuits, because that was the most obvious thing you did wrong, but really you’re also missing a very basic point of how the entire edifice of fault-tolerant quantum computation works. The fact that you state right up front things like “In fact quantum error correction is only able to reliably suppress environmental and operator imprecision errors that affect only one or a few physical qubits in a Quantum error-correcting code (QECC) code block” tells me that you haven’t gone carefully through the literature on fault-tolerance.

We certainly should clarify those diagrams and the fact that we don’t introduce errors into the QECC machinery in the actual simulations, since that makes it clear that we are not hiding errors in some way to sabotage the CSS scheme. We are working on a subsequent version to clarify this part of the discussion, but the ultimate results are the same.

While it is fine to say "this is just a preprint, who cares," you have to realize that posting things to the arXiv does have an effect beyond just putting your idea out to others. In particular, as I noted originally, papers like this immediately get picked up by those who haven’t even done the work you’ve done and learned quantum error correction. I mean, if I were going to write a paper claiming that an entire subject area that well over a thousand people have been working on for over a decade is fundamentally broken, I might, before I post the paper to the arXiv, run it by a few people who know the field. Me, I’m grumpy and old and mean, but the rest of the quantum computing field is full of really nice people, like Preskill, DiVincenzo, Shor, Knill, Laflamme, Zurek, Gottesman, Aliferis, etc., who I’m sure would give you great feedback.

26 Replies to “A Quantum Bogosity Updated”

  1. Well, at least these guys go after over-rotation errors, something that, at least, is not dealt with in the original QECCs. I wish they could do the same analysis on an actual FT circuit to get some more interesting results. One of the few things that I can still remember from undergrad PHYSICS LAB is that systematic errors are tougher to deal with.
    Now to refute myself:
    People *have* addressed over-rotations or systematic errors, although not as much as stochastic errors: [1] you can use composite pulses to reduce them in a low-level way, and [2] check out "Resilient Quantum Computation" by Knill, Laflamme, and Zurek, which is possibly a solution to another observation, "Quantum Computation with Phase Drift Errors" by Miquel, Paz, and Zurek, who actually perform a somewhat similar analysis to the paper we are dealing with here. I never liked the idea, but somehow it makes sense to randomize systematic errors manually into stochastic ones [twirling, is it called?]. Very interesting problems, IMHO.
    I think what’s wrong with the paper here is neither ignorance [which we are all blessed with infinitely] nor mistakes, but its attitude.

  2. Dave, a couple of points… Let me try to just clarify what the simulation section is supposed to be saying before moving on to the issue of better codes. It’s not that we are not simulating the circuits in the diagrams. We are simulating them. We use them in one case with operator imprecision on every component, and indeed we are not simulating concatenated codes there, just a single-level 7-qubit CSS code. We also, however, run the circuits with no operator imprecision in these components, so we effectively give the code the benefit of the doubt and assume its error correction machinery is fault-free. In both cases we apply repeated transversal X gates with operator imprecision followed by error correction. The error correction part will either be faulty or fault-free depending on the simulation being run. The error accumulates with the same trend for both simulations. That’s all this section is getting at.
    Regarding better codes, we definitely want to simulate larger codes, and codes that also incorporate concatenation, and clearly we understand that codes are not perfect. The extent to which they are not perfect given practical limits on the technology is essentially the point of the paper. Indeed, I’m not trying to argue that larger codes don’t mitigate this problem. Clearly more components of an imprecise gate applied to a state will be likely to become orthogonal to the codewords as the codewords get larger.
    Given small enough imprecision, the theorem definitely says there is a point at which so many qubits have been added to some particular code that the imprecision of a single application of transversal X gates, for instance, will be mitigated. In any event, a simple 7-qubit CSS code for this circuit does not seem to be enough with no concatenation, even assuming error-free detection/correction machinery.
    What specific code would you suggest trying with the goal in mind of running Shor’s algorithm to factor 128-bit numbers? Roughly how many qubits do you think would be required and how many levels of concatenation, assuming say operator imprecision error is present?
    The point you make, and the basic point made by theorists is to add more qubits and operators. We’re taking a look at this from the perspective of what real devices would need to be able to do to meet these theoretical requirements. In particular, we believe there are fundamental physical limits to how far you can go with this, and this is the ultimate point of the work being developed.

  3. What specific code would you suggest trying with the goal in mind of running Shor’s algorithm to factor 128-bit numbers? Roughly how many qubits do you think would be required and how many levels of concatenation, assuming say operator imprecision error is present?
    Well for a 128-bit number I would use my computer, BTW.
    http://arxiv.org/abs/0711.1556
    http://arxiv.org/abs/quant-ph/0604070
    http://arxiv.org/abs/quant-ph/0509051
    The extent to which they are not perfect given practical limits on the technology is essentially the point of the paper
    I find this hard to believe considering there isn’t even one reference to current technology in the paper.
    In any event, a simple 7-qubit CSS code for this circuit does not seem to be enough with no concatenation, even assuming error-free detection/correction machinery.
    If you are below the threshold for error correction for this code, then you should at least see how this helps. You don’t make any effort to even determine where this number is.
    We’re taking a look at this from the perspective of what real devices would need to be able to do to meet these theoretical requirements.
    See above references. How can you say this with a straight face when you have no reference to current technology? I don’t get it.
    In particular, we believe there are fundamental physical limits to how far you can go with this, and this is the ultimate point of the work being developed.
    Hey dude, of course there might be a fundamental limit. But where in this paper do you find such a limit?

  4. There is a reference to current technology due to the choice for the specific range of operator imprecision error. We are also compiling data for smaller ranges such as +-1e-6 and +-1e-7. References obviously exist to these ranges in current technology, and it’s definitely a good idea to add them explicitly to justify the choice of rotation error range. The limits on current technology I’m referring to are the limits on operator precision.
    I have seen the first paper by Cross et al. and they do discuss an array of various codes to try. As noted in our conclusions trying many other latest and greatest codes is the next step. In terms of the specific simulation, whether or not some n-bit number for small n can be handled easily classically is neither here nor there since the point is to simulate how well a quantum computer will fare.
    We can argue all day about this paper being a work in progress, but it’s a waste of time since we clearly don’t agree on this point. I’ll post again down the road when we’ve simulated some of the other codes and posted the next version of the draft.

  5. Blah, blah, blah, SCIENCE! Blah, blah, blah, MATH! Blah, blah, blah, YO MAMA!
    This has been a condensation of every scholarly debate in the history of science.

  6. About the code words themselves changing: this is something that will happen in an ion-trap quantum computer if the phase-reference laser slowly drifts. But that doesn’t matter in that case as the information is retrieved with that same drifting laser anyway. In other words, the information is not in the basis (which slowly changes), but in the quantum state as written in that slowly changing basis.

  7. Well Anonymous, you may be right. But in science it’s often fairly clear cut, as in this case: Dave is right as rain.
    A preprint is also not just some metafizical mumbling. It’s a paper you expect others to read and comment on.
    So: wrong science, lame writing, offer excuses.
    Fizix isn’t philosophy, where all scholarly work smacks of the Monty Python argument skit. This here boils down to raising a number less than one to a large power and getting a smaller number. It’s pretty simple, really.

  8. “…the quantum computing field is full of really nice people, like Preskill, DiVincenzo, Shor, Knill, Laflamme, Zurek, Gottesman, Aliferis, etc who I’m sure would give you great feedback.”
    Definitely a good bunch of folks but getting feedback from people is not as easy as it appears. Or perhaps it’s just me…

  9. I read the paper as saying that you can factor the error into a component orthogonal to valid codewords plus a non-orthogonal component. The error-correction and fault-tolerance circuits reduce the former, but cannot help at all with the latter. Concatenating codes doesn’t help because the codes aren’t doing anything for the non-orthogonal component in the first place.
    IOW, it’s not just that QECCs don’t perfectly fix all errors; it’s that there’s a linear component to the error which they don’t fix at all, and so will inevitably wreak havoc as the circuit scales.
    (Now, is this actually true? I have no idea. I know nothing about quantum computing and post only to show off my ignorance. But it doesn’t look like nonsense at least.)

  10. http://arxiv.org/abs/quant-ph/0604070
    Operation    | Time μs now (future) | Failure rate now (future)
    Single gate  | 1 (1)                | 10^{-4} (10^{-8})
    Double gate  | 10 (10)              | 0.03 (10^{-7})
    Measure      | 200 (10)             | 0.01 (10^{-8})
    Movement     | 20 (10)              | 0.005 (5×10^{-8}) per μm
    Split        | 200 (0.1)            |
    Cooling      | 200 (0.1)            |
    Memory time  | 10 to 100 sec        |
    Trap size    | 200 (1–5) μm         |
    These are pretty optimistic expectations: 10^{-7}–10^{-8} gate and measurement error rates. I doubt that initialization and measurement of even a classical analog device can be achieved at such a small error rate. I would say measurement can’t be done better than 10^{-5} (even for an analog classical device/computer).
    I wonder what D-Wave’s qubit measurement error rate is? According to my calculation, if the measurement fidelity of one qubit is 0.95, then a 1000-qubit measurement will be correct with probability 0.95^1000 ≈ 1/10^23, and at 1 GHz it would take on average a million years to find out what the ‘quantum computer’ computed. Or if the measurement error is 0.01 at present, then D-Wave can’t have more than 5000 qubits, because 0.99^5000 ≈ 1/10^22 and it would require 100 thousand years on a 1 GHz ‘quantum computer’ to get a correct measurement (at 1 GHz it would require about a year for 4000 qubits).
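(A quick check of the powers and the clock arithmetic in the comment above, and only of those; the assumptions about the hardware are the commenter's own.)

```python
# Probability that n independent measurements all succeed, and the implied
# average wait at one attempt per nanosecond (the 1 GHz figure above).
seconds_per_year = 3.15e7
for p_single, n in ((0.95, 1000), (0.99, 5000)):
    p_all = p_single ** n
    years = (1.0 / p_all) / 1e9 / seconds_per_year
    print(f"{p_single}^{n} = {p_all:.1e}  ->  ~{years:.0e} years of attempts")
```

Both come out in the same ballpark as the comment's figures.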

  11. Let us add a few comments to this discussion:
    If the noise model is stochastic over- and under-rotations (that is, not systematic errors), one can represent such noise as a superoperator which is a mixture of unitary transformations. From such representation one could extract a discrete set of Kraus operators for example, and typically these Kraus operators would not be proportional to simple Pauli operators.
    However, one can do a rigorous fault-tolerance analysis for noisy superoperators just as for Pauli noise, see e.g. the long paper by Aharonov and Ben-Or or papers by Knill, Laflamme, and Zurek. One can expand the superoperator into a sum of an ‘error-free’ superoperator (which acts as the ideal operation on the system) and an error operator, which contains anything else. Provided that the norm of the error operator is small enough, i.e. below some threshold, error-correction helps in protecting a quantum system.
    The literature on fault-tolerance focuses largely on simple Pauli noise models, since typically only for those models can one numerically simulate the effect of coding, or provide threshold estimates. There is certainly room for research which connects rigorous fault-tolerance analyses (such as the Aharonov/Ben-Or work or Aliferis/Gottesman/Preskill) to actual errors expected and seen in the experiments.
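Here is a small numerical sketch of that superoperator/Kraus picture (an illustration of the general idea only; the one-sided distribution of rotation angles below is an arbitrary choice for the sketch, not a model taken from any of the papers mentioned):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def rx(theta):
    """X rotation exp(-i*theta*X/2)."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

# A mixture of unitaries: the intended gate is the identity, but each use
# picks up a random over-rotation theta drawn uniformly from [0, eps].
# (The one-sided distribution is an arbitrary illustrative choice.)
eps = 0.5
thetas = np.linspace(0.0, eps, 2001)
weights = np.full(thetas.shape, 1.0 / thetas.size)

# Choi matrix of the channel sum_k w_k U_k . U_k^dagger, with |U>> = vec(U).
choi = np.zeros((4, 4), dtype=complex)
for w, t in zip(weights, thetas):
    v = rx(t).reshape(4, 1, order='F')          # column-stacked vec(U)
    choi += w * (v @ v.conj().T)

# Kraus operators from the eigendecomposition of the Choi matrix.
vals, vecs = np.linalg.eigh(choi)
kraus = [np.sqrt(val) * vecs[:, i].reshape(2, 2, order='F')
         for i, val in enumerate(vals) if val > 1e-10]

for K in kraus:
    print(np.round(K, 3), "\n")
# The dominant Kraus operator comes out close, up to an overall phase, to
# rx(eps/2); the small one is mostly X with an order-eps identity admixture,
# i.e. not a bare Pauli operator.
```

The weight of the non-ideal part shrinks with eps, which is the "small error operator" in the expansion described above; the Kraus operators themselves are not proportional to simple Pauli operators, as the comment says.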

  12. Dave, George, you guys both have valid points, but I think the meat of the paper is getting lost in the controversy. Dave, you’re correct that the simulations here are not using state-of-the-art gadgets, and that leads to incorrect conclusions about the large-scale implications of the simulations. But the simulations themselves seem valid; they do what they say they do: examine explicitly the effect on a simplistic QECC of errors of a certain character.
    I think this approach in itself is novel, in that most of the simulations of QEC to date have been based on application of (uniform) random Pauli operators at some rate. These two noise models are not equivalent, so the results from Cross (among others) do not directly translate. Moreover, errors from the real world are more commonly described in terms of small deviations in rotations than in terms of random application of Pauli matrices.
    So while the Viamontes paper is inaccurate in its conclusions, I think it’s an interesting new approach, and one that should be extended to more robust QECC techniques to see how they perform under this noise model (as opposed to the canonical Aliferis/Gottesman/Preskill one). In any case, it seems a little unkind to call it “bogus”!

  13. I don’t know, Richard. It seems to me that claiming that the error model has never been considered (as Barbara mentioned, even the original threshold theorems covered this form of noise) is pretty close to what I would call wrong. It’s just not true to say that this error model hasn’t been considered before. Even if they hadn’t said this, not citing the vast literature where such models are considered is pretty high on my list of bad things to do.
    If the entire point of the paper was to perform simulations of an error model, and discuss them within the context of what is known (and here I mean known known: i.e. there are rigorous theorems that have been proven) then fine. But that’s not what this paper claims at all. From the paper, “quantum error correction has no effect on operator precision errors.” Bogus, wrong, whatever you’d like to call it….
    Finally I’ll quote the conclusion “The fact that Shor’s algorithm is not more efficient than classical algorithms removes the only strong evidence for the superior computational power of quantum computers relative to classical computers.” Not a bogus claim, Richard?
    (I also like how they quote the tensor network simulation literature without the corresponding references to the fact that contracting general tensor product networks is #P-complete. If the paper didn’t so blatantly have an ax to grind (quantum computers are analog computers… please, that is so 1995), I might not be such a big ugly jerk.)

  14. Dave et al.,
    I think the fundamental misunderstanding by the authors is this:
    Transversal application of gates over-rotated by an amount epsilon does not result in the logical state being over-rotated by epsilon!
    I think that many theorists living and breathing QEC every day fail to realize how difficult it is to grasp this fact, especially for someone coming from a classical computer systems background (which is my background and the background of both of the authors, I believe).
    When I began working on quantum, I read a couple of dozen papers on QEC and fault tolerance, as well as chunks of Mike & Ike, and still was troubled by this. It is not addressed directly in language that a computer engineer can follow in the most prominent QEC/FT literature (including, with apologies, that of some of the readers/commenters here @the Quantum Pontiff). Several years ago, at Yoshi Yamamoto’s quantum summer school, Ike came and gave some lectures on QEC/FT. I stuck my hand up and asked about it, and still didn’t understand his explanation, which, if memory serves, included the phrase, “rotation by an angle other than pi is outside the Clifford group”. Yeah, so? What’s the impact of that? (I’m sure Ike and most of the students were thinking, “Where did they find this moron?”)
    After that lecture, tired of not understanding it, I worked through by hand the state vector for rotating all of the qubits in the code word by (pi+epsilon), and finally saw (if I’ve done the math right) that it results in a change to the logical state of (pi+epsilon^2). (A toy check in the same spirit, for a smaller code, appears just after this comment.)
    Ah! Enlightenment! It does work!
    Moral of the story: Moments after being satisfied about noise, engineers will ask about systematic errors, and physicists are not very good yet at explaining it in terms that we understand. We’re simply not mathematically prepared to follow it, and come away feeling that our concerns are being brushed aside without being addressed. Patience on both sides is required, as well as faith in the good intentions and general intelligence of your correspondent.
    Dave, you (and some of the others here) have been very patient and helpful with me one-on-one on a number of occasions. Keep it up.
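In the same spirit as the by-hand calculation described in the comment above, here is a tiny symbolic check, done for the even simpler 3-qubit bit-flip code with ideal recovery (a toy of mine, not the 7-qubit calculation the comment refers to; the exact power of epsilon depends on the code and on what is being counted):

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
s, c = sp.sin(eps / 2), sp.cos(eps / 2)

# Each physical gate: Rx(pi + eps) = -sin(eps/2) * I - i*cos(eps/2) * X.
# Acting transversally on the 3-qubit codespace {|000>, |111>} and projecting
# back onto it, the "stay put" (logical identity) amplitude is (-s)**3 and the
# "flip" (logical X) amplitude is (-i*c)**3, so the leftover logical rotation
# about X, relative to a perfect logical X, has angle 2*atan(s**3 / c**3).
residual = 2 * sp.atan(s**3 / c**3)
print(sp.series(residual, eps, 0, 6))   # leading term: epsilon**3 / 4
```

For this code the leftover logical over-rotation is cubic in epsilon rather than linear, which is the moral of the comment: transversal imprecision of size epsilon does not show up as a logical imprecision of size epsilon.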

  15. Dave, I didn’t mean to claim that error models like the one in this paper haven’t been *considered* — obviously, they have, in all the ways Barbara and Panos refer to. On the other hand (quoting Barbara&Panos) “the literature on fault-tolerance focuses largely on simple Pauli noise models, since typically only for those models can one numerically simulate the effect of coding”.
    As far as I know, the reason for the privilege granted to the Pauli group in this role is due to the predominant use of “Gottesman-Knill” techniques for simulation. So, for me at least, that’s the interesting part of this paper: The application of an alternate simulation technique to examine a less thoroughly simulated error model.
    I agree that there are bogus claims in the paper, but as I read it, they mostly arise from analyses that don’t incorporate all that is known (known known). The claims that are getting attention in the blogosphere are incorrect (but correctly reasoned from the authors’ assumptions). So I don’t fault the authors for false or deceptive reasoning (unless the claim is that they intentionally left out references), just for making too-big claims while being ignorant of the state of the art. If I were reviewing this paper for publication, I would recommend against publication until more current technologies were incorporated, but that’s what the arXiv is for: getting less-than-mature things out there for people to see.
    Maybe I’m just a “silver-lining” type, but I think the simulations in this paper are interesting, regardless of the faulty analysis. Dave, you’re right to denounce the faulty conclusions in the paper; I just wanted to point out that there seems to be some good in it as well.

  16. Dave,
    Putting aside the rhetoric, such a careful reading of a paper is a great service mainly to the authors. I would have loved such a careful reading of a paper of mine.
    Coming back to the rhetoric, you cannot imagine how devastating it is, to so many people, when you refer to yourself as “old”. 🙂

  17. Yes, theorists sometimes are overly optimistic in their early estimates of decoherence rates. On the other hand, the history of science suggests that the theoretical limits to relaxation *can* be approached … after a lot of work.
    For electron spin-lattice relaxation in solids, the above evolutionary process took roughly 40 years (1932-72), as documented by Orbach and Stapleton in a 1972 review article (BibTeX summary appended).
    The same elements are present today in quantum computing … except that today’s noise models are less specific (because no one knows what the hardware will be) … and the experiments to test these models are more difficult (regardless of what the hardware will be). Shall we say … another 100 years to work it all out?
    In the meantime, the Orbach-Stapleton review is recommended as a case study that respects both points of view.
    ———
    @inCollection{Orbach:72, author = {R. Orbach and H. J. Stapleton}, title = {Electron Spin-Lattice Relaxation}, booktitle = {Electron Paramagnetic Resonance}, pages = {121–216}, editor = {S. Geschwind}, publisher = {Plenum Press}, city = {New York-London}, year = 1972, jasnote = {This article gives a humility-inducing history of our understanding of spin-lattice relaxation, in which (a) theorists predict low relaxation rates, (b) experimentalists measure much higher relaxation rates, following which (c) theorists improve their theories, (d) experimentalists clean up their experiments, and (e) the cycle begins anew. Theorist: Waller Experiment: Gorter Theorist: Heitler, Teller, and Fierz, Kramers, Kronig, Van Vleck, Dyson, etc, 1932-1972. 146 references. } }

  18. As a follow-up to the Orbach-Stapleton review (and mainly for fun) here is a strategy for thinking about this problem that triangulates the tough issue of “who’s right” by suggesting that everyone is right.
    Let’s begin by assuming (for the moment) that linear quantum mechanics is wrong … and this post will assume that everyone is happy with that as a starting point. 🙂
    Specifically, let’s assume that linear quantum mechanics is wrong in a way that precludes any attempt to carry through a nontrivial quantum computation (and so evades the no-go argument that "nonlinear quantum mechanics allows NP-complete computation").
    We adopt an Ashtekar-Schilling model in which the *true* state-space of the universe is a Kahler manifold having some non-trivial (curved) geometry … we’ll imagine that this manifold is specified by M-theory (or something).
    For definiteness, we specify a universe of (say) 1000 electrons and protons whose state-space has (say) 1000 dimensions per particle. In other words, we specify that the dimensionality of physical state-space is proportional to some fixed power of the number of particles, rather than being exponential in the number of particles.
    And the quantum laws of this universe are simple: they are simply the ordinary postulates of linear quantum mechanics projected onto this nonlinear state-space. So linear quantum mechanics is locally true, but not globally true (and it is straightforward to check that all of the usual theorems that preclude non-causal communication still hold true).
    Needless to say, there is plenty of room for quantum phenomena to occur in this universe. For example, the state-space is large enough to encompass all of the calculations of ab initio quantum chemistry, and all of the calculations of matrix product states in solid state physics, etc. So chemistry, biology, and solid state physics still work — even quantum cryptography still works.
    In this model universe, as long as we consider only low-order quantum correlations, the predictions of linear quantum mechanics are satisfied to very high accuracy. For example, in quantum chemistry it is commonplace to compute ground-state energies that are accurate to a part in 10^3 or better.
    And of course, if we build and test simple quantum gates, all of the ideas of fault-tolerant QIT similarly work just fine … the approximation of linear quantum mechanics is excellent.
    But of course, in our 1000-particle model universe we have “only” 10^6 (local) basis vectors to work with, and so past a certain level of entanglement, our quantum circuits simply won’t work, no matter how ingenious our error-correcting mechanisms are.
    Now suppose that we are reluctant to abandon linear quantum mechanics–as is natural–and so we wish to ascribe the failure of our quantum circuits to a noise mechanism that is *not* important for simple circuits, but *is* important in complicated circuits. Then Section 2.1 of Panos Aliferis’s (well-written) thesis tells us exactly what kind of noise mechanism we would have to postulate to explain our data, namely “Faults that act collectively on many qubits.”
    In this hypothetical world, it would be very difficult to experimentally distinguish between the effects of collective noise mechanisms on the one hand, and fundamental nonlinearity of the quantum state-space on the other hand. In fact, just about the only experiments capable of doing so would be … quantum computing experiments! 🙂
    That is why I am personally a huge fan of quantum computing research. Because for the stakeholders of mathematicians, information theorists, physicists, chemists, and engineers, the situation is win-win-win-win-win.
    If quantum mechanics is linear, then we are destined to learn a lot about collective noise mechanisms. Good.
    If on the other hand, quantum mechanical state-space is nonlinear, then we have the very enjoyable prospect of generalizing quantum mechanics to curved state-space, along lines that are mathematically similar to the generalization of special relativity to general relativity. Which was lots of fun for both mathematicians and physicists.
    And even if the quantum state-space of the physical universe is exactly linear (which to me seems implausible), there are still two very good reasons for studying nonlinear quantum mechanics: the mathematical beauty of Kahlerian algebras and geometries, and their practical utility in simulating large-scale noisy systems. Which is good for mathematics and good for engineering.
    The bottom line … it seems to me that we are all of us in paradise … theorists, experimentalists, mathematicians, and engineers … and that’s why it’s the Golden Age of quantum information theory.
    There. Hopefully, everyone is happy! 😉

  19. I am not 100% sure, but I calculate that the Deutsch problem can be efficiently solved with a probabilistic (random) classical computer… You don’t need to feed in inputs according to some rule (000, 001, 010, etc.); instead you randomly generate inputs like “101” for the classical computer. If the function is balanced, then on average half the outputs will be 0s and half 1s, so you need only about 2–4 runs: if the outputs are not all zeros or all ones, the function is balanced, and if after at most 10–1000 repeats (for a large number of bits) it still gives you all 0s or all 1s, then the function is constant. Of course, sometimes even a balanced function can give you such a run of only 1s or only 0s, but that probability is about 1/2^n, which compared with the number of balanced functions, 2^{C_{2^{n-1}}^{2^n}}, that you would otherwise need to evaluate is almost nothing (C is the binomial coefficient, http://en.wikipedia.org/wiki/Binomial_coefficient ). But I am not sure… I know there are only two constant functions (0000…000 and 1111….1111), and about the balanced ones I have doubts about that “C”; maybe there are also only two balanced ones (00001111, 11110000, etc.)? Anyway, I just want to say that you still need to check a very big number of functions if you want a very big (many times) speedup, and in practice you almost can’t get a speedup with the Deutsch quantum algorithm compared with a classical algorithm (e.g., simulating both with a desktop PC “quantum computer simulator”)…
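For reference, the bounded-error classical strategy being gestured at in the comment above is completely standard; here is a sketch (nothing quantum about it, and the function names are made up for illustration):

```python
import random

def classify(f, n, trials=20):
    """Classical bounded-error test for the Deutsch-Jozsa promise problem.
    Query f on random n-bit inputs.  Any two differing outputs prove 'balanced';
    if all `trials` outputs agree, answer 'constant', which is wrong with
    probability at most 2**-(trials - 1) when f is actually balanced."""
    first = f(random.getrandbits(n))
    for _ in range(trials - 1):
        if f(random.getrandbits(n)) != first:
            return "balanced"
    return "constant"

n = 100
print(classify(lambda x: x & 1, n))   # a balanced function: parity of the low bit
print(classify(lambda x: 0, n))       # a constant function
```

The textbook caveat, which the replies below circle around, is that the Deutsch-Jozsa separation is only against classical algorithms required to be correct with certainty; against bounded-error classical algorithms like this one, a constant number of queries suffices.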

  20. I wonder how many (balanced) functions there are which need to be queried more than, say, 1000 times when, say, n = 100 bits? I can bet that such functions (with n bits) would be no more than about a 1/2^n fraction of the 2^{C_{2^{n-1}}^{2^n}} functions (C is http://upload.wikimedia.org/math/f/d/c/fdc2dbb37b95bd0dc0fd287e5e6639ba.png ). So I predict that the probability that you will “meet” a function which you have to query 2^{n-1}+1 times is 1/2^{2n}; I may be wrong… But the probability that you will meet a function which needs more than 1000 queries is, I think, something like 1/2^n or maybe even much smaller (it could be bigger, but I also think it would be smaller than 1/10^20, and note this is for n = 100 bits like I said; for more bits the probability of meeting a function that needs more than 1000 queries will be even much smaller than 1/10^100 (for, say, ~500 bits)), something like (2^n)/(2^{C_{2^{n-1}}^{2^n}})…
    Also, is there quantum error correction or fault tolerance or something for the Deutsch-Jozsa algorithm? And as we know, error correction can be used only for a limited number of qubits (and with unlimited computation time(?)). So “qubits can also fail” if there are more than, say, 100 million of them… So in practice (for a reasonable time and a reasonable number of functions which it is possible to query in that time) quantum computation doesn’t have an advantage in the Deutsch-Jozsa problem (over classical/random computation)! And a quantum computer also doesn’t have an advantage in the Deutsch problem (with 2 qubits, one x qubit), because it uses considerably more gates than a classical computer (and gates may also be added for QEC/FT). This is the same as with Grover’s algorithm: if your gates are not very precise (and I don’t believe they can be as accurate as the theorists want…), then you will spend very much computation time on QEC/FT and your circuit will be very big (and I think quantum “elementary” circuit elements are by default considerably bigger than classical ones (transistors)), and thus it will be unable to compute at very high frequency (the speed of electricity is limited…) and the classical computer won’t be able to control itself and the separate quantum computer very fast… Thus the quantum computer will work at about 1 kHz. So, according to Grover’s algorithm, a classical computer with not very many gates (about a million) would be able to do a million search steps while a quantum computer with 20 qubits does about 1000 Grover iterations; thus the quantum computer would be about 1000 times faster. With about 100 classical processors, or maybe even fewer, you would outperform this super 20-qubit fault-tolerant quantum computer working at 1000 Hz. Maybe even with one modern processor… and with the same number of gates (of which a QC needs many for FT…)… This can also be true for Shor’s algorithm if the qubit accuracy is not very high… Only quantum simulation would be an advantage of a QC, but for now it can’t even do that…

  21. I am confused, because I calculate that a fault-tolerant Deutsch-Jozsa algorithm on a quantum computer will fail faster than on a probabilistic computer. After 1000 queries the failure probability is

  22. “Debunker, of course the Deutsch-Jozsa problem can be solved efficiently if you allow error.”
    I can say the same for a quantum computer: of course the Deutsch-Jozsa problem can be solved efficiently if you allow the error that comes with fault tolerance, with high probability. You could say that you can repeat the quantum computation several times to achieve a very small error, but I will say that the fault-tolerant circuit is about 1000 times more complex, and thus a probabilistic computer can do those 1000 repetitions in the time the quantum computer has done only one… So the Deutsch-Jozsa problem ISN’T solved faster on a quantum computer (than on a probabilistic one, with the same failure probability)!
    If no induction, etc Deutsch problem with one (two) qubits, when I can’t understand such algorithm…
    I read a little bit about Simon’s algorithm and it seems you need more than a million qubits to achieve a reasonable speedup, but with FT you may spend more qubits than the speedup is worth, and that’s the limit… I just thought that the Deutsch-Jozsa problem was exponentially faster on a quantum computer than on a classical one, but it seems that it isn’t faster at all… Very interesting; why then were Deutsch’s ideas accepted as giving an advantage… Grover’s algorithm would be the best one for me to understand, but it can’t be understood with intelligence… Putting in, say, |001> and finding that |001>… I wonder how one would need to search for the shortest collision traveling distance…
    ____________________________________________________
    The Deutsch/Deutsch-Jozsa algorithm doesn’t give a speedup – debunked.
    ____________________________________________________
