The Viewpoint Skeptics of Quantum Computers Don’t Want You To Hear

Quantum computers are fascinating devices.  Our current understanding of these devices is that they can do something that classical computers cannot: they can factor numbers in polynomial time (thank you, Peter Shor!)  Interestingly, however, we can't prove that these devices outperform classical computers on any class of problems.  What this means is something very particular: we can't show that the model of a quantum Turing machine can solve problems more efficiently than the classical model of a Turing machine.  Complexity theorists say that we can't show that BPP does not equal BQP.  Complexity theorists remind me of my son learning new letters.  Sorry, I can't help it.  S. T. O. P. spells….stop!

A dark secret (okay it's not really secret, but this is a blog) of classical computing is that we (or rather, they, since I'm as much a complexity theorist as I am handsomely good looking) also can't say a lot along the same lines about classical computers.  The most famous example of this, currently (2012), is that classically we don't know whether there are computations which take polynomial (in the size of the problem) space and unlimited time, but can't be done with just a polynomial (in the size of the problem) limit in time.  In jargon this is the fact that we don't know whether P (or BPP) equals PSPACE.  That's a huge gap, because PSPACE includes nearly everything under the sun, including computers which use time machines.  That's right.  Classical complexity theory has yet to show that computers that use frickin' time machines aren't more powerful than the laptop I'm typing this on.

A reasonable person, I would think, given this state of affairs, would admit that we just don't know and try to figure out more about the model of quantum computation.  Academia, however, attracts an interesting class of hyper-smart people who try to get places in life by being contrarian.  That's great when it leads to results, and often it does.  Being skeptical is an important part of the scientific process.  But when it doesn't lead to results, which I think is the current state of arguments about quantum computers, it leads to senior professors acting very unprofessionally, and stifling a field.  Quantum computing is in exactly this state of existence.  I can count the number of jobs given to theorists in quantum computing over the last decade on my hands.  It's still far greater than the number of senior folks I've talked to who are credulously skeptical of quantum computers and who should know better (i.e. they've at least read the relevant papers.)  The number who are skeptical but who haven't actually read the papers?  My registers don't count that high.

For an example of this phenomenon, hop on over to the awesome and widely read blog Godel's Lost Letter and P=NP, where one of the blog's coauthors, Ken Regan, has a post describing some work on trying to understand the limits of quantum computers.  That's great!  But in this post, Ken, who is an associate professor, can't help but get in a dig at quantum computers along the lines of "we can't prove that it can't do anything":

But there is no proof today—let me repeat, no proof—that quantum circuits in BQP are not easy to simulate classically

Because I like poking tigers, and am no longer beholden to the whims of an academic community that strongly rejects quantum computing, I posted a comment (okay, I'd have posted this even when I was a pseudo-professor) whose last line was

Oh, and, p.s. there is also no proof that classical circuits can’t solve NP-complete problems efficiently, but for some reason I don’t see that in all of your posts on classical computers ;)

to which Ken responded

As for “no proof”, Dick provided some thoughts which I merged into my intro; I pondered upgrading that line to add “…, nor even a convincing hardness argument”—but thought that better left-alone in the post

So you can see the kinds of thoughts that go through many theoretical computer scientists' heads when they're confronted with quantum computing.  Instead of "let's figure this out" the response is "I want to remind you that we haven't proved anything, even though we also haven't proved the same thing about likely even more powerful models of computation."  If you don't think this is a case of bias in academia, then you're reading a different novel than I am.  And if you don't think this has an impact on junior academics, please see the correlational evidence from past hiring in academia.  (Or if you don't like that: do an experiment.  Give the damn people the jobs to hang themselves by.  Or at least don't give them advice to avoid quantum computing because of your own biases.  I'm looking at you, you know who you are.  Pffst!)

Like I said, however, I think focusing on making actual progress in understanding quantum computers is the important path to take (and to the credit of Ken, whom I'm picking on simply because he's at the top of the temporal queue of a long line of guys who like to pontificate about the power of quantum computers without having any arguments that go beyond "I think…", he has tried to answer this question.  But not without throwing in a backhand that he seems to find utterly professionally appropriate.)  And of course the previous two paragraphs are their own brand of ad hominem slander, just from my own completely biased perspective.  But toward being *ahem* productive: I'm completely convinced that there are significant, proven reasons to build quantum computers.  This is a controversial statement, because I know all complexity theorists will disagree with this point of view.  So this is aimed at the group of people with minds open enough to think not about complexity classes, but about real world experiments (we might call them physicists). 😉

The argument is almost as old as quantum computing itself.  It comes from the so-called "black-box" query complexity results in quantum computing, albeit as seen through a physicist's measuring device.  What these models do is as follows.  They consider a set of black box functions (say functions from n bits to 1 bit, so-called binary functions) and ask you to identify something about a function drawn from this set.  For example, the set could be all binary functions that are either constant (on all inputs they output 0, or on all inputs they output 1) or balanced (on half of the inputs they output 0 and on the other half they output 1).  The problem would then be, given a machine that implements one of these functions, to decide whether the function is constant or balanced.  One then "measures" the effectiveness of an algorithm for solving this by the number of times it has to use the black box in order to figure out which set the function belongs to.
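To make the query counting concrete, here is a minimal numpy sketch (my own illustration; nothing like it appears in the original post, and all the function names are mine) of this constant-versus-balanced game, which is the Deutsch-Jozsa problem: a deterministic classical algorithm may need $2^{n-1}+1$ evaluations of the black box to be certain of the answer, while the quantum algorithm needs exactly one.

```python
import numpy as np

def constant_f(value):
    """A constant black-box function on n bits."""
    return lambda x: value

def balanced_f(ones):
    """A balanced black-box function: outputs 1 exactly on the inputs in `ones`
    (which should contain exactly half of the 2^n inputs)."""
    ones = set(ones)
    return lambda x: 1 if x in ones else 0

def classical_queries(f, n):
    """Deterministic classical strategy: evaluate inputs one at a time.
    Two different outputs prove 'balanced'; 2^(n-1)+1 identical outputs prove 'constant'."""
    seen = set()
    for queries, x in enumerate(range(2 ** n), start=1):
        seen.add(f(x))
        if len(seen) > 1:
            return queries, "balanced"
        if queries == 2 ** (n - 1) + 1:
            return queries, "constant"

def deutsch_jozsa_queries(f, n):
    """Simulated Deutsch-Jozsa: a single query to a phase oracle suffices.
    After H^n |0...0>, the oracle, and H^n again, the amplitude of |0...0> is
    (1/2^n) * sum_x (-1)^f(x): magnitude 1 if f is constant, 0 if f is balanced."""
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))                           # H^n applied to |0...0>
    state = state * np.array([(-1) ** f(x) for x in range(N)])   # one oracle query
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    amp_all_zeros = (Hn @ state)[0]
    return 1, ("constant" if abs(amp_all_zeros) > 0.5 else "balanced")

n = 10
f = balanced_f(range(2 ** (n - 1)))   # balanced: outputs 1 on the first half of the inputs
print(classical_queries(f, n))        # (513, 'balanced'): the deterministic worst case
print(deutsch_jozsa_queries(f, n))    # (1, 'balanced')
```

(Caveat, in my words: this toy problem only separates exact classical from quantum query counting; the proven exponential separations for bounded-error algorithms that the next paragraph refers to come from other promise problems, such as Simon's problem.)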

So what is the state of query complexity differences between classical and quantum computers?  It can be proven that there are black box problems that can be solved by quantum computers using a number of queries polynomial in the size of the problem, but that require an exponential number of queries classically (Simon's problem being the canonical example).  That's right.  There is a proven exponential separation.  (For those who would like to argue that the comparison is not fair because a quantum device that computes a function implements different physics than the device that gives you a classical computation, I would only note that our world is quantum mechanical, and we can compare a quantum query of the quantum device to a classical one.  A classical query of this quantum device is exponentially less efficient.)

At this point you may wonder what all of the fuss is about quantum computers not being proven to be more powerful than classical computers.  The answer is interesting and starts with the way we set up the problem.  We were given a black box that computes a classical function.  We can think about this literally as a machine whose inner workings we can't probe any deeper.  In this respect it is a sort of a-physical device, one that isn't connected to the normal context of what a computation is (as modeled by, say, a parallel Turing machine).  If this were instead a real physical device, you could take it apart and look at how it worked.  This means that you could get more information about the computation being performed.  And once you allow this, well, it is no longer clear that you couldn't solve the problems for which quantum computers offer speedups just as fast on a classical computer.  Thus while we know that with respect to these black box problems quantum computers are exponentially faster than their classical brethren, we can't carry this over to statements about models of computation.

But take a step back.  Suppose you are an experimental physicist and I give you a black box and ask you to figure out whether the box implements one or another sets of functions.  Well then, if you use this experimental device without peering into its innards, then you really really want to use a quantum computer for your experiment.  The difference between exponential graduate students and polynomial graduate students is most certainly something that will get your grant funded by the NSF.  Because the universe is quantum mechanical, damnit, and if you want to perform experiments that more quickly reveal how that universe operates, you’ve got to query it quantum mechanically to be most efficient.

Okay, you may not be convinced.  You may argue that at its heart you can't ever have a box that you aren't allowed to take apart and probe (can you?)  Fine.  So I'll modify the game a bit.  I'll give you the quantum system that results from querying the device in the standard way these quantum algorithms do, $\frac{1}{\sqrt{2^n}} \sum_x |x\rangle |f(x)\rangle$.  Now you either get to probe this system using only measurements in the computational basis, or you get to use the full power of quantum computers, measuring it any way you can build in your quantum laboratory.  In that case, you can show that a quantum experimentalist will exponentially outperform a classical one at characterizing this state, i.e. at solving the given promise problem.  Think about this as a game, a game which, by being quantum mechanical, you can win exponentially faster than you could classically.
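To spell out why the computational-basis option really is just classical querying (this gloss is mine, not from the original post): measuring the state above in the computational basis returns a particular pair $(x, f(x))$ with probability

$$ \Pr\left[(x, f(x))\right] = \left| \frac{1}{\sqrt{2^n}} \right|^2 = \frac{1}{2^n}, $$

so each copy of the state hands you exactly one uniformly random classical evaluation of $f$, and nothing more.  A fully quantum measurement, by contrast, gets to interfere all $2^n$ branches of the superposition at once, and that, roughly, is the resource the proven query separations exploit.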

Of course this won’t convince anyone, especially not classical theoretical computer scientists (who once were at the vanguard of a totally new field, but now find themselves defending their own legacy code.) Does it at least pass the test of trying to present evidence in either direction for the power of quantum computers?  Not really, for those who refuse to believe that quantum theory isn’t actually the right theory of nature.  But it does seem to tell us something is fundamentally very very different about the ability to use quantum theory in a setting where you’re trying to extract information about an unknown quantum system.  And it’s proven.  And it’s not a way of thinking that the old guys would like you to think 🙂

To Catch Terrorists, Think Quantum Mechanically?


An interesting paper in PNAS from a few years back that I missed: "Strong profiling is not mathematically optimal for discovering rare malfeasors" by William H. Press.

Suppose you have a large population of people and a single bad guy who you want to catch.  You could look through the entire population to  find the bad guy, or you could set up checkpoints (err I mean airline security screening areas) to look for people, sampling only some small fraction of the population that goes through the checkpoint.   Now, if you don’t know anything about the population you’re sampling, you might as well just sample them randomly until you find your baddie, since you don’t have any other information that could help you.

But suppose that you are able to come up with some prior probabilities for different people to be the bad guy.  I.e. you’ve developed a model for what makes someone more likely to be a bad guy, and further assume that this model is really pretty accurate.  To each person you can assign a probability $p_i$ that the person is indeed the bad guy.  Now you could continue to sample uniformly, but you have this great model that you want to use to help catch the bad guy.  What do you do?

It turns out that the wrong thing to do is to sample in proportion to the probability $p_i$.  To figure out the correct strategy, suppose that you sample person $i$ from the population with probability $q_i$, with replacement.  Then if person $k$ is your man, the probability that you get him in any one screening is $q_k$, so the number of screenings until you catch him is geometrically distributed with mean $1/q_k$.  Averaging over your prior for who the bad guy is, the mean number of people you will have to screen is $\sum_i p_i/q_i$.  So to find the optimum we need to minimize this expression over the $q_i$ subject to the constraint that $\sum_i q_i = 1$.

To calculate this optimum, you use a Lagrange multiplier

$$ \frac{\partial}{\partial q_j} \left[ \sum_i \frac{p_i}{q_i} + \lambda \left( \sum_i q_i - 1 \right) \right] = 0 $$

or

$$ -\frac{p_j}{q_j^2} + \lambda = 0, $$

which, together with our constraint $\sum_i q_i = 1$ (and the requirement that the probabilities be positive), gives us the optimum

$$ q_j = \frac{\sqrt{p_j}}{\sum_i \sqrt{p_i}}. $$

Or, in other words, you should sample proportional to the square root of the probabilities.  Pretty cool, a nice easy, yet surprising answer.
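As a quick numerical sanity check (my own sketch, not from Press's paper; the prior and population size below are made up), you can plug a random prior into the expected-cost formula $\sum_i p_i/q_i$ and compare three strategies: uniform sampling, sampling in proportion to $p_i$ (strong profiling), and sampling in proportion to $\sqrt{p_i}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_screenings(p, q):
    """Mean number of with-replacement screenings before the bad guy is caught:
    sum_i p_i / q_i, where p is the prior and q is the sampling distribution."""
    return float(np.sum(p / q))

# A random, fairly skewed prior over a population of 10,000 people.
n = 10_000
p = rng.random(n) ** 4    # raise to a power so a few people look much more suspicious
p /= p.sum()

strategies = {
    "uniform": np.full(n, 1.0 / n),
    "proportional to p (strong profiling)": p,
    "proportional to sqrt(p)": np.sqrt(p) / np.sqrt(p).sum(),
}

for name, q in strategies.items():
    print(f"{name}: {expected_screenings(p, q):,.0f} expected screenings")

# Strong profiling costs sum_i p_i * (1/p_i) = n, exactly the same as uniform
# sampling -- which is the surprise behind the paper's title.  The sqrt strategy
# costs (sum_i sqrt(p_i))^2, and Cauchy-Schwarz shows no other q can beat it.
```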

Even more awesome is that we got some square roots of probabilities in there.  Quantum probability amplitudes are, of course, like square roots of probabilities.  Now if only we could massage this into insight into quantum theory.  Do it.  Or the terrorists win.