The Viewpoint Skeptics of Quantum Computers Don’t Want You To Hear

Quantum computers are fascinating devices. Our current understanding of these devices is that they can do something that classical computers cannot: they can factor numbers in polynomial time (thank you, Peter Shor!) Interestingly, however, we can't prove that these devices outperform classical computers on any class of problems. What this means is something very particular: we can't show that the model of a quantum Turing machine can solve problems more efficiently than the classical model of a Turing machine. Complexity theorists say that we can't show that BPP does not equal BQP. Complexity theorists remind me of my son learning new letters. Sorry, I can't help it. S. T. O. P. spells… stop!

A dark secret (okay, it's not really a secret, but this is a blog) of classical computing is that we (or rather, they, since I'm as much a complexity theorist as I am handsomely good looking) also can't say a lot along the same lines about classical computers. The most famous example of this, currently (2012), is that classically we don't know whether there are computations which can be done with a polynomial (in the size of the problem) amount of space and unlimited time, but cannot be done with a polynomial (in the size of the problem) amount of time. In jargon, this is the fact that we don't know whether P (or BPP) equals PSPACE. That's a huge gap, because PSPACE includes nearly everything under the sun, including computers which use time machines. That's right. Classical complexity theory has yet to show that computers that use frickin' time machines aren't more powerful than the laptop I'm typing this on.

A reasonable person, I would think, given this state of affairs, would admit that we just don't know, and would try to figure out more about the model of quantum computation. Interestingly, however, academia attracts an interesting class of hyper-smart people who try to get places in life by being contrarian. That's great when it leads to results, and often it does. Being skeptical is an important part of the scientific process. But when it doesn't lead to results, which I think is the current state of arguments about quantum computers, it leads to senior professors acting very unprofessionally, and to the stifling of a field. Quantum computing is in exactly this state of existence. I can count the number of jobs given to theorists in quantum computing over the last decade on my hands. It's far greater than the number of senior folks I've talked to who are credibly skeptical of quantum computers and should know better (i.e. they've at least read the relevant papers). The number who are skeptical but who haven't actually read the papers? My registers don't count that high.

For an example of this phenomenon, hop on over to the awesome and widely read blog Gödel's Lost Letter and P=NP, where one of the coauthors of the blog, Ken Regan, has a post describing some work on trying to understand the limits of quantum computers. That's great! But in this post, Ken, who is an associate professor, can't help but get in a dig at quantum computers along the lines of "we can't prove that it can't do anything":

But there is no proof today—let me repeat, no proof—that quantum circuits in BQP are not easy to simulate classically

Because I like poking tigers, and am no longer beholden to the whims of an academic community that strongly rejects quantum computing, I posted a comment (okay, I'd have posted this even when I was a pseudo-professor) which ended with the line

Oh, and, p.s. there is also no proof that classical circuits can’t solve NP-complete problems efficiently, but for some reason I don’t see that in all of your posts on classical computers ;)

to which Ken responded

As for “no proof”, Dick provided some thoughts which I merged into my intro; I pondered upgrading that line to add “…, nor even a convincing hardness argument”—but thought that better left-alone in the post

So you can see the kind of thoughts that go through the minds of many theoretical computer scientists when confronted with quantum computing. Instead of "let's figure this out", the response is "I want to remind you that we haven't proved anything, even though we also haven't proved the same thing about likely even more powerful models of computation." If you don't think this is a case of bias in academia, then you're reading a different novel than I am. And if you don't think this has an impact on junior academics, please see the correlational evidence of past hiring in academia. (Or, if you don't like that: do an experiment. Give the damn people the jobs to hang themselves by. Or at least don't give them advice to avoid quantum computing because of your own biases. I'm looking at you, you know who you are. Pffst!)

Like I said, however, I think focusing on making actual progress in understanding quantum computers is the important path to take (and to the credit of Ken, who I'm picking on simply because he's at the top of the temporal queue of a long line of guys who like to pontificate about the power of quantum computers without having any arguments that go beyond "I think…", he has tried to answer this question. But not without throwing in a backhand that he seems to find utterly professionally appropriate.) And of course the previous two paragraphs are full of exactly the same ad hominem (er, ad slander'em) reasoning, just from my own completely biased perspective. But toward being *ahem* productive: I'm completely convinced that quantum computers offer significant, proven reasons to be built. This is a controversial statement, because I know all complexity theorists will disagree with this point of view. So this is aimed at the group of people with minds open enough to think not about complexity classes, but about real world experiments (we might call them physicists). 😉

The argument is almost as old as quantum computing itself. It comes from the so-called "black-box" query complexity results in quantum computing, albeit as seen through a physicist's measuring device. What these models do is as follows. They consider a set of black-box functions (say, functions from n bits to 1 bit, so-called Boolean functions) and ask one to identify something about this set of black-box functions. For example, the set of functions could be all such functions that are either constant (on all inputs they output 0, or on all inputs they output 1) or balanced (on half of the inputs they output 0 and on the other half they output 1). The problem would then be: if I give you a machine that implements one of these functions, tell me whether the function is constant or whether it is balanced. One "measures" the effectiveness of an algorithm for solving this by the number of times that you have to use the black box in order to figure out which set the function belongs to.
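
(To make the accounting concrete, here is a minimal Python sketch of the query model for this constant-vs-balanced promise problem. The oracle constructors and the counting wrapper are my own illustrative names, not anything from the literature; the sketch only shows what "counting queries" means, and that a deterministic classical strategy may need as many as 2^(n-1)+1 uses of the box in the worst case.)

```python
# A minimal Python sketch (all names are illustrative) of the black-box query
# model for the constant-vs-balanced promise problem described above.
import itertools
import random


def make_constant(n, value):
    """An oracle on n-bit inputs that always outputs `value` (0 or 1)."""
    return lambda x: value


def make_balanced(n):
    """An oracle that outputs 1 on exactly half of the 2^n inputs."""
    inputs = list(itertools.product([0, 1], repeat=n))
    ones = set(random.sample(inputs, len(inputs) // 2))
    return lambda x: 1 if x in ones else 0


def classify_classically(oracle, n):
    """Deterministic classical strategy: keep querying until either two
    different outputs appear (balanced) or 2^(n-1)+1 equal outputs have been
    seen, at which point the promise forces the function to be constant.
    Returns (answer, number_of_queries)."""
    outputs_seen = set()
    for queries, x in enumerate(itertools.product([0, 1], repeat=n), start=1):
        outputs_seen.add(oracle(x))
        if len(outputs_seen) == 2:
            return "balanced", queries
        if queries == 2 ** (n - 1) + 1:
            return "constant", queries


if __name__ == "__main__":
    n = 4
    print(classify_classically(make_constant(n, 0), n))  # ('constant', 9): worst case, 2^(n-1)+1 queries
    print(classify_classically(make_balanced(n), n))     # ('balanced', k) for some small k, usually
```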

So what is the state of query complexity differences between classical and quantum computers? It can be proven that there are black-box problems that can be solved by quantum computers using a polynomial number of queries in the size of the problem, but that require an exponential number of queries classically. That's right. There is a proven exponential separation. (For those who would like to argue that the comparison is not fair because a quantum device that computes a function implements a different physics than that which gives you a classical computation, I would only note that our world is quantum mechanical, and we can compare a quantum querying of the quantum device to a classical one. A classical query of this quantum device is exponentially less efficient.)
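
(As a companion to the classical sketch above, here is a minimal NumPy simulation of the quantum side for the same constant-vs-balanced problem: the Deutsch-Jozsa algorithm decides it with a single phase-oracle query. A caveat on my part: this particular example illustrates the exact setting, one quantum query versus 2^(n-1)+1 deterministic classical queries; the bounded-error exponential separation the post alludes to is usually credited to Simon's problem. All function names below are illustrative.)

```python
# A minimal NumPy sketch (names are illustrative) of the Deutsch-Jozsa
# algorithm: one quantum query to a phase oracle decides constant-vs-balanced.
from functools import reduce

import numpy as np


def deutsch_jozsa(f_values):
    """Decide constant-vs-balanced from the truth table f_values (length 2^n)
    using a single simulated phase-oracle query."""
    n = int(np.log2(len(f_values)))
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    Hn = reduce(np.kron, [H] * n)            # Hadamard on every qubit

    state = np.zeros(2 ** n)
    state[0] = 1.0                           # start in |0...0>
    state = Hn @ state                       # uniform superposition over all inputs
    state = ((-1.0) ** f_values) * state     # the single phase-oracle query
    state = Hn @ state                       # interfere the branches

    p_all_zero = abs(state[0]) ** 2          # probability of measuring |0...0>
    return "constant" if p_all_zero > 0.5 else "balanced"


if __name__ == "__main__":
    n = 4
    constant = np.zeros(2 ** n, dtype=int)                                # f(x) = 0 for all x
    balanced = np.array([bin(x).count("1") % 2 for x in range(2 ** n)])   # parity of x is balanced
    print(deutsch_jozsa(constant))   # -> constant  (p_all_zero = 1)
    print(deutsch_jozsa(balanced))   # -> balanced  (p_all_zero = 0)
```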

At this point you may wonder why there is all this fuss about quantum computers not being proven more powerful than classical computers. The answer is interesting and starts with the way we set up the problem. We were given a black box that computes a classical function. We can think about this literally as a machine that we can't probe any deeper to find out how it actually works. In this respect it is a sort of a-physical device, one that isn't connected to the normal context of what a computation is (as modeled by, say, a parallel Turing machine). Suppose that this were a real physical device; then you could take it apart and look at how it worked. This means that you could get more information about the computation being performed. And when you allow this, well, it is then not clear that you couldn't solve the problems for which quantum computers offer speedups just as fast on a classical computer. Thus, while we know that with respect to these black-box problems quantum computers are exponentially faster than their classical brethren, we can't carry this over to statements about models of computation.

But take a step back. Suppose you are an experimental physicist, and I give you a black box and ask you to figure out which of two sets of functions the box implements. Well then, if you use this experimental device without peering into its innards, you really, really want to use a quantum computer for your experiment. The difference between exponential graduate students and polynomial graduate students is most certainly something that will get your grant funded by the NSF. Because the universe is quantum mechanical, dammit, and if you want to perform experiments that more quickly reveal how that universe operates, you've got to query it quantum mechanically to be most efficient.

Okay, you may not be convinced. You may argue that at its heart you can't ever have a box that one can't take apart and probe (can you?) Fine. So I'll modify the game a bit. I'll give you the quantum state that results from querying this device in the standard way these quantum algorithms do, $\sum_x |x\rangle |f(x)\rangle$. Now you either get to probe this state using only classical measurements in the computational basis, or you get to use the full power of quantum computers, measuring it with any measurement you can build in your quantum laboratory. In that case, you can show that a quantum experimentalist will exponentially outperform a classical one at characterizing this state, i.e. at solving the given promise problem. Think about this as a game, a game in which, by being quantum mechanical, you can win exponentially faster than you could by being classical.
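
(To make the game concrete, here is a minimal NumPy sketch. The post doesn't specify which promise problem produces the exponential gap, so I'm assuming Simon's problem, a 2-to-1 function with a hidden shift s satisfying f(x) = f(x XOR s), purely for illustration, and all names in the code are mine. Computational-basis measurements of the state hand you uniformly random (x, f(x)) pairs, so recovering s needs roughly 2^(n/2) copies via a birthday-style collision search; a Hadamard-basis measurement of the first register yields outcomes orthogonal to s, so about n copies suffice.)

```python
# A minimal NumPy sketch (all names are illustrative) of the measurement game,
# assuming Simon's problem as the concrete promise: f is 2-to-1 with a hidden
# shift s, f(x) = f(x XOR s).  The referee hands out copies of the state
# (1/sqrt(2^n)) sum_x |x>|f(x)>.
from functools import reduce

import numpy as np

rng = np.random.default_rng(0)


def simon_truth_table(n, s):
    """Truth table of a 2-to-1 function satisfying f(x) = f(x ^ s)."""
    f = -np.ones(2 ** n, dtype=int)
    label = 0
    for x in range(2 ** n):
        if f[x] < 0:
            f[x] = f[x ^ s] = label
            label += 1
    return f


def query_state(f, n):
    """The state (1/sqrt(2^n)) sum_x |x>|f(x)> as a (2^n x 2^n) array whose
    rows index the first register and columns the second."""
    state = np.zeros((2 ** n, 2 ** n))
    for x in range(2 ** n):
        state[x, f[x]] = 1.0 / np.sqrt(2 ** n)
    return state


def copies_until_classical_collision(f, n):
    """Computational-basis measurements give uniformly random (x, f(x)) pairs;
    count the copies consumed until two distinct x's share an f value."""
    seen = {}
    copies = 0
    while True:
        copies += 1
        x = int(rng.integers(2 ** n))
        if f[x] in seen and seen[f[x]] != x:
            return copies, seen[f[x]] ^ x      # the collision reveals s
        seen[f[x]] = x


def hadamard_basis_samples(f, n, shots):
    """Measure the first register in the Hadamard basis: every outcome y
    satisfies y . s = 0 (mod 2)."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    Hn = reduce(np.kron, [H] * n)
    amps = Hn @ query_state(f, n)              # Hadamard on the first register, identity on the second
    probs = (amps ** 2).sum(axis=1)            # marginal distribution of the first register
    return rng.choice(2 ** n, size=shots, p=probs)


if __name__ == "__main__":
    n, s = 6, 0b101101
    f = simon_truth_table(n, s)

    copies, s_guess = copies_until_classical_collision(f, n)
    print("classical copies used:", copies, "recovered s:", bin(s_guess))

    ys = hadamard_basis_samples(f, n, shots=3 * n)
    print("quantum outcomes all orthogonal to s:",
          all(bin(int(y) & s).count("1") % 2 == 0 for y in ys))
    # Solving the resulting linear system y . s = 0 over GF(2) from ~n outcomes
    # pins down s; here we just check the orthogonality constraint.
```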

Of course this won't convince anyone, especially not classical theoretical computer scientists (who once were at the vanguard of a totally new field, but now find themselves defending their own legacy code). Does it at least pass the test of trying to present evidence in either direction for the power of quantum computers? Not really, for those who refuse to believe that quantum theory is actually the right theory of nature. But it does seem to tell us that something is fundamentally very, very different about the ability to use quantum theory in a setting where you're trying to extract information about an unknown quantum system. And it's proven. And it's not the way of thinking that the old guys would like you to adopt 🙂

9 Replies to “The Viewpoint Skeptics of Quantum Computers Don’t Want You To Hear”

  1. Hi, Dave. My Q asked for a hardness argument. Putting Graph Iso into BQP might do it for me, as I’m on the fence about its being in (BP)P—there’s a reasonable cluster of natural problems that would make BQP hard for, bigger than the Factoring/DiscLog/solved-HSG cluster IMHO. As for the black-box argument, I took a stab at arguing it’s an incomplete comparison here and here—what if the classical algorithms are allowed free use of algebraic extensions?

    1. Isn't factoring a better candidate for hardness than graph isomorphism?

      1) We know distributions that (empirically) resist all known algorithms.
      2) There is tremendous commercial and military interest in finding poly-time factoring algorithms, and yet (as far as we know) none have been found.

      Of course it’s reasonable to conjecture that both are in P.

      As for algebraic extensions, Aaronson-Wigderson shows that this still permits exponential oracle separations (unless I’m misunderstanding what you mean here):
      http://rjlipton.wordpress.com/2011/11/14/more-quantum-chocolate-boxes/#comment-15262

  2. Thanks for the links Ken. I think I’ve read the first, but not the second.

    Not sure if I understand your q about algebraic extensions. If they arise in real "classical" computers w/ a real, correct accounting of physical overhead, then yes 🙂

    But I think the point I was trying to make goes beyond the black-box setting. It's a game where you are given a quantum state and you want to extract info about it. Now, (a) quantum theory may fail, so this is a silly limit, and exponentially fewer measurements are not actually needed; (b) quantum theory may be all right, but the actual experiment may not scale as desired due to something we don't know blocking fault-tolerant quantum computing; or (c) all is as I describe. Importantly, this is an information-screening trick: the black box is gone, and I've (say) sent the state to you in photons. Unless you believe in really crazy hidden variable theories (it's okay, Ben Toner and I wrote part of such a model and survived), the only object you get is the system in the appropriate state.

    Another version of this which is even more powerful is the quantum communication complexity results. There are measurements on bipartite quantum states that require an exponential amount of classical communication (in the number of qubits per party) to simulate classically (i.e. even with infinite shared classical randomness and x bits of classical communication, x must scale exponentially).

    It seems to me that it is hard to argue against information processing being radically different quantum mechanically without going toward extra-quantum theories or arguments about practicality. "Quantification" arguments are really hidden versions of these arguments, and eventually subject to experiment.

    As to hardness results, GI being in BQP wouldn't convince me any more than the fact that I don't try to write code to solve NP-complete problems in polynomial time in my day job.

  3. Dave, your decoherence seems to be spontaneously reversing. New quantum papers, new quantum posts, etc. Is there some kind of spin-echo thing going on? It seems Google is non-Markovian.

  4. Dave, I +1 Joe's comment. I'll have to think a bit more about your communication setting. My past 120 minutes have been "de-" (or "in-")coherent themselves, none of it having to do with the GLL blog. I seem to have made the official version of something I never got within 3,000 miles of, even beating the truly official guy who's 3,000 miles further in the other direction. And I'm running 2 cores on it already, 2 more at bedtime. This is all thanks to a little bear with a NY Yankees cap who was just now in Tully's Coffee at 2800 160th Pl NE in Bellevue, across the way from you. I am not kidding—it has a grand piano.

    Anyway, my skepticism is not exponential but rather quadratic—I think the standard quantum circuit model understates the cost of operation, in a way that cancels (most of) the gains from Grover search. I’m starting to flesh out why for a post Dick and I intend after the last debate post, but some of the groundwork is in the “Grilling” post, where I wrote:

    What makes […] a plausible component of an entanglement measure is that it is additive for disjoint systems, i.e. for products of polynomials on disjoint sets of variables, and is non-increasing under projections hence amenable to substitutions. Multiway entanglement measures on N qubits are notoriously difficult to define, but we may hope the polynomial representations provide a salient measure that also bounds the true complexity of building and operating the circuit.

    Thus I’m actually a Shor believer, though I think the problem it solves is in P anyway. Hence I don’t disagree with the speedups you note, even their practicality. My original comment about hardness evidence, however, is in line with what we quoted from Dan Simon himself at the second link I gave, which I confirmed by private e-mail with him.

  5. Ken, do you mind me asking where you stand on the hardness of simulating quantum dynamics? This is perhaps the most trivial use of a quantum computer, but it is a problem that has been studied extensively for many decades in terms of classical computation. Lots of special cases can be solved efficiently classically, but the general case looks extremely hard.

      1. I must admit, I haven’t stayed entirely up to date on this, but the post-selection argument seems very powerful.

        I find this statement hard to reconcile with your position that there is not a convincing hardness argument for quantum circuits. We know how to simulate reasonable Hamiltonians in the circuit model via Trotterization. Further, Bremner, Jozsa and Shepherd have a very nice paper on simulating a restricted class of quantum circuits, in which they show that efficient classical simulation of that class would imply a collapse of the polynomial hierarchy, which I assume is something you would consider extremely unlikely [1].

        [1] http://rspa.royalsocietypublishing.org/content/467/2126/459.short
