The latest round of the debate between Aram and Gil Kalai is now up over at Goedel’s Lost Letter.
I realize that I’m preaching to the choir at this venue, but I thought I would highlight Aram’s response. He nicely sums up the conundrum for quantum skeptics with a biblical allusion:
…Gil and other modern skeptics need a theory of physics which is compatible with existing quantum mechanics, existing observations of noise processes, existing classical computers, and potential future reductions of noise to sub-threshold (but still constant) rates, all while ruling out large-scale quantum computers. Such a ropy camel-shaped theory would have difficulty in passing through the needle of mathematical consistency.
One of the things that puzzles me about quantum skeptics is that they always seem to claim that there is an in-principle reason why large-scale quantum computation is impossible. I just completely fail to see how they infer this from existing lines of evidence. If the skeptics would alter their argument slightly, I might have some more sympathy. For example, why aren’t we powering everything with fusion by now? Is it because there is a fundamental principle that prevents us from harnessing fusion power? No! It just turned out to be much harder than anyone thought to generate a self-sustaining controlled fusion reaction. If skeptics were arguing that quantum computers are the new fusion reactors, then I could at least understand how they might think that in spite of continuing experimental progress. But in principle impossible? It seems like an awfully narrow needle to thread.
Your comment is like saying that gravity is impossible because it is hard to see how quantum gravity would pass through the needle of mathematical consistency. People are always making weird consistency arguments about quantum mechanics. If you take those people seriously, then there is no reality. But you take them just seriously enough to believe in quantum computers? I say that quantum computers are impossible.
Let us consider the following principles:
(1) textbook quantum mechanics is valid at low energies
(2) quantum computers cannot be built, even in principle
(3) gravity exists
QC skeptics seek to reconcile (1) and (2). The field of quantum gravity seeks to reconcile (1) and (3).
In my opinion, the empirical evidence for (3) is stronger than the evidence for (2).
It’s not hard to see how to make quantum gravity work. It’s hard to come up with theories that are testable and different from our current theories that hold in their relevant domains. The same goes for the best ideas about why quantum computers won’t work. I’ve got many of them in my back pocket, and I’m just a lowly software engineer.
Sadly, many of the arguments people make are either 1) incompatible with experiments or 2) silly arguments resting on major misunderstandings of quantum theory. Arguments of both kinds are worth ignoring. The rest are worth examining, just like theories of quantum gravity, precisely when they make testable predictions. Skeptics would do well to make explicit their testable predictions AND why they are compatible with our current experiments.
Of course you’re free to believe what you want. Blocking a field because you feel that way is just being a jerk. Until we have an experimental challenge, the ground you stand on is your own neural peculiarities. It’s sad sometimes to see people who should be the most open to the fact that we don’t know everything using their personal opinions to block others’ exploration. And yes, there is active hostility to quantum computing that is political and uninformed.
My predictions are no communication faster than light, no perpetual motion, no entropy-lowering machines, no cold fusion, no particle localization contrary to Heisenberg, and no super-Turing computation. But I am not blocking those fields. Go ahead and do research. If you ever make any progress, then publish it. You might want to get tenure first, because you may never make any progress.
Roger, I think perhaps you misunderstand what is meant by predictions here. I hope Dave and Aram don’t mind me interjecting, but I am pretty sure that what Dave means by a prediction is a mathematical consequence of a consistent theory, rather than, say, what is meant by a prediction in the context of the Grand National or a political election. In other words, a prediction in this context is not arbitrary nor an opinion but is fixed by the underlying assumptions of the proposed model.
Your post, on the other hand, seems to simply be a list of things you don’t think will happen (i.e. the Grand National sense of ‘prediction’).
Really? Then they are living in some sort of dream world where they imagine theories that make predictions which are impossible in the real world. When I say that those things are impossible, I mean that they are contrary to the laws of physics.
Roger, can you give a concise, simple example of a physical law which large-scale quantum computers would violate? You believe that quantum computers are impossible with what seems to be mathematical certainty. Surely to hold such a strong belief you have commensurate evidence, right?
Mathematical certainty is for mathematical objects. My beliefs about physics are based on physical theories and observations. Thermodynamics says that certain things are impossible, but it is the experiments, not the mathematical laws, that are so convincing.
Can you point to a simple reproducible experiment which conclusively demonstrates that large-scale quantum computers are impossible? What I’m trying to understand is, what evidence do you have to support your belief?
I would have a Nobel Prize if I could do that.
What is frustrating about your (and others’) skepticism is the apparent gap between your certainty that large-scale QC is impossible, and the evidence that exists in support of this idea.
What is the point in arguing with them? This was clear from the beginning. Gil Kalai wraps arguments that contain no physics in mathematics to make them sound better. Here is how he would argue against flying machines:
Conjecture A: There is a fixed constant distance H above the Earth above which humans cannot travel.
Conjecture B (refinement of Conjecture A): There are constants W, p such that objects of weight W – epsilon can travel at most a height H + O(epsilon^p) above the surface of the Earth.
Conjecture C: There is a fixed constant speed C beyond which humans cannot travel.
Okay, he might actually be onto something there.
It’s good that we don’t have to choose once-and-for-all between Gil’s “no-go” conjectures and Aram’s refutation of them. Because if Kalai-style “no-go” conjectures are (admittedly) a Scylla of unknown dynamical physics, then Aram’s vision of quantum computing is (admittedly) a Charybdis of unmet technology development milestones.
Uhhh … maybe it’s prudent to steer a quantum course that is neither toward Scylla nor toward Charybdis? It’s a mighty big quantum ocean, after all, with many unexplored islands in it.
Well, I’m not claiming that quantum computers will be an easy technology to develop. If Gil argued that QCs would never be economically feasible, I would be far less confident in my counter-arguments. But I think we can be extremely confident of the fact that QCs are possible in principle, just like we could be extremely confident that BECs and lasers were possible in principle even before they were demonstrated experimentally.
I say that there are no good grounds for being “extremely confident of the fact that QCs are possible in principle”. What is the experiment showing that? Who got that Nobel Prize?
I think that Aram’s analogy is sound. If you believe the experimental evidence for something (e.g. Stern-Gerlach experiments) and the explanatory framework for it (e.g. quantum mechanics and the spin-statistics theorem) then you are naturally led to certain predictions (e.g. lasers and Bose-Einstein condensates). Even though it took many decades to observe a BEC, we were quite confident that it was in principle possible. 100 years ago, nobody would have believed that sub-microkelvin temperatures would ever be feasible, but that is a question of practicality, not of principle. So it was widely believed that BECs were possible in principle.
This is exactly the case we have for large-scale quantum computers. The prediction follows from some simple, not-so-controversial assumptions: (1) quantum mechanics is valid at low energies, and (2) quantum error correction and fault tolerance are possible with weakly correlated noise. I think we can agree on (1), or else the claim is quite radical… it would truly be a revolution if we could prove (1) wrong, which makes the search for quantum computers that much more interesting. But then (2) follows from the mathematics of quantum mechanics! The only wiggle room is in how you model the correlations in the noise (and this is where Gil’s objections lie). However, experimentalists are continuing to make more and more progress at reducing noise in quantum devices, and theorists are proving that stronger and stronger noise models still allow fault-tolerant quantum computing. For these two pincers to fail to meet in the middle, even in principle, would require some very strange physics, and no one (including Gil) has given a convincing model for how that might come about.
Steve, although your reading of the physics literature reasonably supports Aram’s thesis, it is entirely feasible to read that same physics literature so as to reasonably support Gil’s thesis.
In support of both points-of-view, I have posted to TCS StackExchange precisely such a double-edged reading of the physics literature as an answer to the question “Physical realization of nonlinear operators for quantum computers.”
Historically speaking, many of the physicists who originated the key concepts of quantum dynamics — starting with Dirac and Heisenberg and continuing through Faddeev and Popov — had extensive training in classical dynamics, and for reasons that the above-linked TCS StackExchange answer reviews, this may be true of future generations too.
Perhaps future generations of physicists will appreciate that both Gil and Aram have made some very good points in their debate. I hope so, anyway! 🙂
I call Occam’s Razor. With non-Hamiltonian-but-symplectic flows (or any other proposal) you have the problem that you are introducing a bunch of baggage for no reason other than to rule out QC. Simple pedestrian quantum mechanics is enough to explain all of the low-energy physics experiments that have ever been done without the need to postulate nonlinearities, strangely correlated noise, or anything else. Sure if those peculiarities existed, then we might have reason to worry. But what motivation do you have for introducing them aside from trying to rule out QC?
Steve, to put it another way, let’s consider the widespread belief that simple pedestrian quantum mechanics (SPQM) suffices to explain low-energy physical dynamics.
But when we examine the foundations of this belief — confining our attention to low-energy dynamics — we appreciate that:
• SPQM is not necessary to explain (e.g.) narrow resonant lines and linear dynamical response to small external fields.
• SPQM is not sufficient to explain (e.g.) gravity.
• SPQM is not adequate to explain (by any argument that at present is universally accepted) the ubiquitous non-observation of “Cat” states.
Thus even in the low-energy regime we have ample reasons to regard SPQM as neither necessary, nor sufficient, nor even adequate, to our understanding of physical dynamics.
That is why Gil Kalai’s conjectures are deserving of respect (IMHO): these conjectures help focus our creative attention upon these three aspects of SPQM.
Steve, for me Pangloss’ Privilege trumps Occam’s Razor. And so, here is a 21st century Panglossian narrative that makes *everyone* happy.
(1) Gil is happy, for the physical reason that Nature’s state-space *is* a polynomial-dimension secant join of Segre varieties (aka a product-state manifold).
(2) The quantum computing community is happy too, because (in this Panglossian 21st century universe) the state-space dimension *is* large enough to support quantum computations having reasonable practical utility (say a few thousand qubits).
(3) Particle theorists are even happier, because the modest decoherence and linearity associated to this non-flat state-space manifest themselves physically as (what else?) Einsteinian gravity.
(4) Systems biologists and condensed matter physicists (same thing!) are happiest of all, because (for small molecules and nanostructures) it is perfectly feasible to boost Newton’s G by 30 orders of magnitude: in effect, reducing the dimensionality of quantum state-space; this “artificial gravitational viscosity” makes nanoscale quantum systems *far* easier to simulate, with negligible loss of engineering accuracy.
So, what’s not to like? 🙂
With respect to (4) above, this argument was conceived partly tongue-in-cheek, and yet further reflection reveals some substance to it.
Let us reflect that “simple pedestrian quantum mechanics” (SPQM) is known to fail for sufficiently strong gravitational fields, and is thought to be replaced by some (at present unknown) deeper theory, that we will call post-SPQM dynamics (or P-SPQM) for short.
P-SPQM dynamics may be easier than SPQM dynamics to simulate, or alternatively harder to simulate, and perhaps even several P-SPQM theories exist, some of which are easier to simulate and some harder.
Supposing that some easier-to-simulate variant of P-SPQM exists, then for nanoscale simulations we are free to boost the value of Newton’s $G$ by some factor of order $e^2/(4\pi \epsilon_0 G m_p m_e) \sim 10^{40}$, which is of course one of Dirac’s large numbers.
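As a rough sanity check of that dimensionless ratio, here is a short back-of-the-envelope sketch (my own arithmetic, using approximate SI values for the constants, not anything from the original comment):

```python
import math

# Rough numerical check of the Coulomb-to-gravitational force ratio for a
# proton-electron pair (Dirac's large number). Constant values are approximate
# SI figures, quoted to a few digits only.
e    = 1.602e-19   # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
G    = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
m_p  = 1.673e-27   # proton mass, kg
m_e  = 9.109e-31   # electron mass, kg

ratio = e**2 / (4 * math.pi * eps0 * G * m_p * m_e)
print(f"{ratio:.2e}")   # ~2.3e39, i.e. of order 10^39 - 10^40
```

The ratio comes out near 2×10^39, i.e. of the order quoted above.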
We thus see that the isolation of low-energy QM and high-energy QM is by no means as clean-cut as we might first think … we can readily conceive that frameworks for efficient quantum simulation at low energy can borrow naturally from the high-energy quantum gravity frameworks and vice versa.
Might physical insights conceived in the context of low-energy condensed-matter SPQM applications prove useful at the frontiers of high-energy QM and/or cosmology? This has happened many times before in quantum physics, and there is no particular reason to think it cannot happen again. If we are diligent and lucky, perhaps it will happen again! 🙂
I have extended the above Kalai/Harrow compromise framework as a comment on Gödel’s Lost Letter and P=NP.
And there is more to be said on this topic. For practical purposes, does it really matter all that much whether we “worship at the Church of the Larger Hilbert Space” versus “cultivate our garden” in the surrounding fields of (classically) simulable dynamics?
Here’s a thought experiment for the skeptics. Suppose we live in an alternate universe. In this universe, illustrious scientists like Peter Feynman and Richard Shor demonstrated that large-scale QC is not possible so long as quantum mechanics is valid and noise is not too strongly correlated. Would you then start looking for strangely correlated noise sources that would be the “key” to unlock large-scale QC? Maybe, if only because of the potential payoff. But suppose that, after 15 years of research, no one had produced a concrete model, not even one with extravagant coordination between systems. Would you still think QC was possible?
Neither would I, and that’s my point.
Steve, is fifteen years thinking about quantum computing really such a long time? After all, history shows us that separating rational, irrational, and transcendental numbers required four thousand years of mathematical effort. If the unveiling of KAM theory (for example) shows us plainly that the mysteries of classical dynamics require centuries of effort to unravel, how many centuries is the unraveling of quantum dynamics plausibly likely to require?
Tomorrow my TCS StackExchange bounty question “Does P contain languages whose existence is independent of PA or ZFC?” closes, and the question itself will be converted to a TCS StackExchange community wiki. This question is mainly concerned with evolving a better understanding of the boundary between P and NP.
Similarly, next month I hope to post on MathOverflow (MOF) a bounty question that seeks to evolve a better understanding of the boundary between efficiently simulable and infeasibly simulable dynamical systems (classical, quantum, and hybrid).
Suggestions posted here on Quantum Pontiff would be very welcome … it would be disappointing to discover that either class of question had simple, black-and-white answers. As Lance Fortnow has said of P versus NP
Surely this is true too of classical / quantum physics, such that in coming decades, both skeptical and non-skeptical insights will have central roles in our integrated appreciation.
Dear Steve and dear everybody,
I think that the nice thing regarding my debate with Aram is that we do not discuss so much the grand question of whether quantum computers are possible in principle or not, or various related “meta” issues (as discussed here), but rather concentrate on smaller fragments of this large question. Moreover, there are various interpretations of my conjectures, and I think that the conjectures and the related endeavor to describe formally non-FT quantum evolutions are of interest also in a reality which supports universal quantum computers. Initially, in the draft of the first post, I described the various interpretations without even stating which interpretation I believe, but Aram asked me to mention also what I think is the correct interpretation and why. Actually, this discussion has led to my IBM example, which we delayed to the present post.
My last post discussed three issues. The first is a summary of the quite fundamental distinctions between the world without quantum fault-tolerance and the world after quantum fault-tolerance. The second is a discussion of geometry and the conjecture that for quantum evolutions with complicated interactions the evolution tells us something about the geometry. The third is about the distinction between special-purpose machines for quantum evolutions and (the yet hypothetical) general-purpose machines. Here the conjecture is that what we are going to learn from special-purpose machines (including the relation between noise and state, and correlated noise) may show us that the noise rate for general-purpose machines must scale up with the number of qubits, which would make such general-purpose machines impossible. Discussing special-purpose machines voids the arguments, made by several commentators, that nature would have to be malicious as well as overly intelligent in order to cause quantum computation to fail. These three particular issues are rather specific and it can be interesting to discuss them. Of course, you all are most welcome to join this more technical discussion (over GLL). The earlier post discussed your and Aram’s counterexample (Steve) to my Conjecture C and, again, this is a specific, interesting fragment of the large issue.
Of course, taking a wide look at the entire issue and at various “meta” matters can also be of value. (In the style of the polymath projects, we can regard this thread as a “discussion thread” while the threads over GLL are more like “research threads”.) Here are two comments:
Steve asked: “Would you then start looking for strangely correlated noise sources that would be the “key” to unlock large-scale QC?”
Sure! By all means, you bet. (And, as long as I feel that I have some new ideas, even after some period where I and others looked and did not find anything.)
I liked Dave’s comment. Dave wrote that he has some of the best ideas about why quantum computers won’t work in his back pocket. By all means, Dave, share these ideas. After all, exploring ideas is our business, isn’t it?
This debate seems to be a draw. No one has compelling arguments for QC, and no one has compelling arguments against it.
One argument is that people believed in Bose condensates before they were realized in the lab. Yes, but there was a lot of evidence for boson statistics. Did anyone think that anything else would happen when bosons were cooled? I doubt it.
Roger, that good arguments exist on both sides of the Kalai/Harrow debate does *not* imply stasis, because at least three disequilibrating events have occurred since the release of QIST: A Quantum Information Science and Technology Roadmap (LA-UR-02-6900 [2002] and LA-UR-04-1778 [2004]):
• Progress toward scalable quantum computation technologies has been far slower than foreseen. Why?
• Progress in quantum dynamical simulation with classical resources has been far faster than foreseen. Why?
• Economic stagnation, debt loads, and declining research funding levels in North America and Europe have been far more severe than foreseen. Why?
Strategically speaking, we urgently need an updated QIST roadmap that acknowledges these realities: a roadmap that charts a vitality-restoring path forward. It is surprising — even dismaying — that there is so little explicit recognition and informed discussion of this need.
I have a question, quite possibly a silly one, but let me ask it anyway.
The analogy between controlled fusion and quantum computers discussed here was made a few times in the debate, and if I remember correctly even some similar issues regarding noise were mentioned. My question is whether, in some sense, controlled fusion is a “special case” of large-scale universal quantum computers. Namely, given a large-scale universal quantum computer, is it possible to emulate the type of quantum nuclear processes demonstrated by a controlled fusion reactor, and moreover, if the answer is yes, can we also use quantum computers emulating controlled fusion processes to actually serve as controlled fusion devices?
Hi Gil,
No, controlled fusion is not a special case of large-scale quantum computation. It’s a particular set of physical processes. Our ability to simulate them on either a quantum or classical computer is independent of our ability to actually perform these tasks. This is because notions of classical and quantum computation are abstracted from any particular physical device, but fusion is a particular physical process. To give an example, it is entirely possible to simulate the mixing of two buckets of differently coloured paint on a classical computer. The dynamics can be simulated forward and backward in time, with essentially equal ease. However, if we try to do this with real paint, reversing the evolution is impossible.
This is pretty much the case with fusion too. It’s not necessarily that hard to simulate numerically, but actually building a device to do this with real nuclei is an entirely different matter.
At least two fundamental simulation challenges are associated to reducing the idea of plasma fusion to the reality of energy-producing fusion reactors, and it is striking that each challenge is associated to a Clay Institute Millennium Prize problem.
• Q1: How can we generate dynamically stable energy-producing plasmas? This problem is associated to the Navier-Stokes Millennium Prize problem, namely, these (classical) dynamical equations are sufficiently nonlinear that we are not even sure they are well-posed, much less simulable with feasible resources.
• Q2: What are the scattering cross-sections of the energy-producing / energy-dependent nuclear fusion processes? This problem is associated to the Yang-Mills Millennium Prize problem, namely, these (quantum) dynamical equations are sufficiently nonlinear that we are not even sure they are well-posed, much less simulable with feasible resources.
If there is a lesson here for FTQC, perhaps it is that the (seemingly less-glamorous) classical instability / noise-related dynamical challenges associated to FTQC may be similarly fundamental to the (seemingly more-glamorous) quantum dynamical and informatic challenges.
Certainly in controlled fusion, the classical problems have proven (unexpectedly!) to be comparably challenging mathematically to the quantum problems — and considerably more challenging in a practical engineering context.
As a further note, it was Aram Harrow who inspired me to reflect upon the Lewis-Carroll-type riddle “How is the Lindblad equation like the Navier-Stokes equation?”
On the one hand, the two equations superficially have wholly different mathematical forms (Lindblad being linear and algebraic, Navier-Stokes being nonlinear and partial differential). Yet on the other hand — as Aram emphasized to me — Lindbladian dynamics should contain, implicitly within itself, a dynamical description of clocks (for example), and more generally arbitrary wave-form synthesizers, and still more generally all the nonlinear signal-processing gadgets of a modern QIT laboratory.
This leads us to appreciate that, with the same symmetry-breaking temporal regularity with which tree-leaves in the wind spin off Kármán vortices, Lindbladian devices can spin off clock pulses. Moreover, the symmetries that are broken in the Lindblad case include not only the time-translation symmetry of the Navier-Stokes case, but also the ambiguity in the operator-sum representation of the trajectory unravelling.
For the above reasons, it seems likely (to me) that our mathematical understanding of both Lindblad dynamics and Navier-Stokes dynamics — including our capabilities to computationally simulate these dynamics — will evolve substantially in parallel.
And to make the analogy even closer, in Carmichael / Caves-style Hamiltonian/Lindbladian quantum trajectory unravellings, the Hamiltonian flows are (very broadly speaking) of Yang-Mills form, while the Lindbladian flows are (very broadly speaking) of Navier-Stokes form.
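To make the “linear and algebraic” half of the contrast above concrete, here is a minimal single-qubit Lindblad sketch (an illustrative example with arbitrary parameters, not drawn from the discussion itself): the right-hand side is a fixed linear map acting on the density matrix, integrated with a crude Euler step.

```python
import numpy as np

# Minimal single-qubit Lindblad master equation, integrated with a naive Euler
# step. Parameters below are illustrative, not taken from the discussion.
H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)   # Hamiltonian ~ sigma_z / 2
L = np.array([[0, 1], [0, 0]], dtype=complex)           # decay operator sigma_minus
gamma = 0.1                                              # damping rate (assumed)

def lindblad_rhs(rho):
    """d(rho)/dt = -i[H, rho] + gamma*(L rho L^dag - 1/2 {L^dag L, rho})."""
    comm = H @ rho - rho @ H
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return -1j * comm + gamma * diss

rho = np.array([[0, 0], [0, 1]], dtype=complex)          # start in the excited state
dt, steps = 0.01, 1000
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)                   # Euler step (crude but linear)

print(np.real(rho[1, 1]))   # excited-state population decays toward ~exp(-gamma*t)
```

The structural point is simply that, unlike the Navier-Stokes right-hand side, nothing in this update depends nonlinearly on rho itself.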
Nowadays, not too many folks receive training in both formalisms! As Garrett Hardin wrote in his celebrated essay “Extensions of ‘The Tragedy of the Commons’” (Science, 1998):
Although Garrett Hardin was himself an ecologist, his principles apply nowadays to STEM enterprises broadly … and yes, to QM/QC/QIT in particular (even especially).
Whoops — sorry for the loss of paragraph formatting. Hardin’s essay (243 citations) is worth reading in its entirety.
It seems (to me) reasonable to foresee a vital future for QM/QC/QIT more-or-less to the extent that it is credibly relevant — even transformationally relevant — to Hardin’s concerns … concerns that were not explicitly addressed in the 2002 and 2004 QIST Roadmaps, and that perhaps (at that time) were not broadly appreciated.
One conceptual and one technical point.
I tried to avoid the notion “in principle” as much as possible. (I think that the only time I referred to it in the debate is when I claimed that a rock concert with a million participants is noisy, in principle.) It would be useful if Steve and Aram would try to explain what precisely they mean by this term, e.g., when saying “we can be extremely confident of the fact that QCs are possible in principle.” What is the factual matter here? Another term which is used but is rather unclear is “from first principles.”
There is no dispute that the model of (noiseless) quantum computers can be described using the language of quantum mechanics; does this make QC “possible in principle”? It is also undisputed that models of noisy quantum systems – both those used for quantum fault tolerance and those which do not allow fault tolerance – are, again, written using quantum mechanical language, but they do not emerge from the quantum mechanics formulation. (Academic discussions and debates like mine with Aram can be useful in making such commonly used terms clearer.)
The technical point is that while correlations are important, a crucial element in the skeptical view of QC is that the noise rate itself scales up for complicated evolutions.
This is related to correlations, since for correlated noise the noise rate in terms of qubit errors scales up compared to the noise rate in terms of trace distance. When you describe a noisy computer via Preskill’s recent Hamiltonian model (and earlier related models), FTQC is possible if a certain norm, which can be seen as expressing the noise rate, is small. (So, given this norm, you can forget about correlations.) The skeptical point of view (which, for example, was expressed in a comment by Robert Alicki) is that this norm goes to infinity when we look at complicated open quantum systems with finer and finer details.
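A toy calculation may make that bookkeeping concrete (a simplified illustration of my own, not Gil’s conjectures or Preskill’s Hamiltonian model): fix the overall error probability per step at p, so that both channels below sit at roughly the same trace-distance-style noise rate, and compare the expected number of qubit errors.

```python
# Toy comparison (illustrative only): two n-qubit noise channels with the same
# overall error probability p per step, i.e. roughly the same trace-distance-style
# noise rate, but very different qubit-error rates.
n = 100     # number of qubits (arbitrary)
p = 0.01    # probability that *any* error happens in a step (arbitrary)

# (a) independent noise: each qubit flips with probability p/n
expected_errors_independent = n * (p / n)    # = p   (errors stay rare and local)

# (b) fully correlated noise: with probability p, all n qubits flip together
expected_errors_correlated = p * n           # = n*p (qubit-error rate scales with n)

print(expected_errors_independent, expected_errors_correlated)   # 0.01 vs 1.0
```

At a fixed trace-distance rate, the fully correlated channel produces n times as many qubit errors, which is the sense in which the qubit-error rate “scales up” under correlated noise.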
Yes, perhaps I overreached in talking about what everyone agrees on. I suppose a skeptical case could have been made against BECs, and that a “minimum temperature” is something that is not inherently implausible.
Aram, my question was different: when you use the term “in principle” or “from basic principles”, what, formally, do you mean? (This question is not part of our debate on the feasibility of quantum error-correction but rather an attempt to clarify some of the terms we use.)