In Defense of D-Wave

The Optimizer has gotten tired of everyone asking him about D-Wave and gone and written a tirade about the subject. Like all of the Optimizer’s stuff it’s a fun read. But, and of course I’m about to get tomatoes thrown on me for saying this, I have to say that I disagree with Scott’s assessment of the situation. (**Ducks** Mmm, tomato goo.) Further, while I agree that people should stop bothering Scott about D-Wave (I mean, the dude’s an assistant professor at an institution known for devouring these beasts for breakfast), I personally think the question of whether or not D-Wave will succeed is one of the most important and interesting questions in quantum computing. The fact that we interface with this black box of a company via press releases, an occasional paper, and blog posts at Geordie Rose’s blog, for me, makes it all the funner! Plus my father was a lawyer, so if you can’t argue the other side of the argument, well, you’re not having any fun! So, in defense of D-Wave…

The Optimizer begins with a list of questions from the skeptic:

Skeptic: Let me see if I understand correctly. After three years, you still haven’t demonstrated two-qubit entanglement in a superconducting device (as the group at Yale appears to have done recently)?

Um, well, actually, Optimizer, entanglement has been demonstrated before the Yale group in a superconducting qubit device. In phase qubits, I believe the Martinis group created entanglement between two qubits in 2006 (Science paper, or if you want Bell inequality violations see this Nature paper.) As far as I know, no one has conclusively demonstrated entangled quantum states in flux qubits, which is what D-Wave is using (the transmon qubits at Yale are charge superconducting qubits, right?) Okay, so your facts are a little off, Optimizer! But of course the real reason you bring this up is that you know (for pure states) that without entanglement there will be no quantum speedup. Actually I think one has to be very careful here as well. For example, in Richard Jozsa’s wonderful article on simulating non-entangled systems, it’s not clear to me that these results can be used to rule out polynomial speedups for non-entangled states (update after Scott’s comment below: damnit, here I meant to say slightly entangled states. The relevance being that for these states it may be difficult to detect their entanglement even though they are useful for quantum computing.) And of course the question for mixed states (a.k.a. the real world) is still open. So I would say that the “entanglement” question is not settled. And I might even argue that the reason you need to worry about this is in quantum computing’s very history: it was well known that linear optics alone could not be used to quantum compute, but then, WHAM, KLM showed that if you had single photons and could detect single photons, you could build a quantum computer. Are we really that confident that quantum systems living somewhere just on the other side of entangled are not a useful resource? Of course my intuition is that for exponential speedups, yes, entanglement is necessary. But polynomial speedups?
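Since I keep waving my hands about pure states, here is a toy sketch (mine, not from anyone’s paper; the gates, qubit count, and circuit below are made up) of why a computation whose state never becomes entangled is classically easy: a product state of n qubits is just n two-component vectors, so you store O(n) numbers instead of 2^n, and each single-qubit gate is a 2x2 matrix multiply.

```python
import numpy as np

# Toy illustration: simulating a circuit that keeps the state a product state.
# Each qubit is tracked as its own 2-component vector, so the cost per gate is
# constant and the memory is linear in the number of qubits.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard
T = np.diag([1.0, np.exp(1j * np.pi / 4)])        # T gate

n = 50                                            # 2**50 amplitudes would be hopeless to store
qubits = [np.array([1.0, 0.0], dtype=complex) for _ in range(n)]   # |00...0>

circuit = [(H, 3), (T, 3), (H, 17), (T, 42)]      # (gate, target qubit) pairs, invented
for gate, target in circuit:
    qubits[target] = gate @ qubits[target]        # one 2x2 multiply per gate

# Probability of seeing |1> if we measure qubit 3:
print(abs(qubits[3][1]) ** 2)
```

The moment a two-qubit gate entangles anything, this bookkeeping breaks down, which is exactly where the interesting (and, for mixed states, still open) questions begin.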
Continuing…

You still haven’t explained how your “quantum computer” demos actually exploit any quantum effects?

Please define “quantum effects.” Also please read arXiv:0909.4321. That’s an interesting paper, and while I definitely agree that it doesn’t demonstrate what I would call “quantum effects,” it shows pretty clearly that the quantum description of what is going on in their flux qubits seems correct. And if you’re going to build an adiabatic quantum computer, what you really care about is that you have characterized your Hamiltonian well and understand the physics of that system.
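To be concrete about what “characterize your Hamiltonian” buys you in the adiabatic setting, here is a toy sketch of my own (a made-up three-qubit Ising instance; nothing here is D-Wave’s actual Hamiltonian or parameters): interpolate H(s) = (1-s)H_B + sH_P and track the gap between the ground and first excited states. The minimum gap along the path, and how it compares with the operating temperature, is what actually governs an adiabatic machine.

```python
import numpy as np
from functools import reduce

# Toy adiabatic interpolation H(s) = (1 - s) * H_B + s * H_P for a made-up
# three-qubit Ising problem. The quantity tracked below, the gap between the
# two lowest energy levels along the path, is what sets the adiabatic run time
# and what finite temperature has to compete with.

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I = np.eye(2)

def op_on(single, site, n):
    """Tensor a single-qubit operator onto qubit `site` of an n-qubit register."""
    return reduce(np.kron, [single if k == site else I for k in range(n)])

n = 3
H_B = -sum(op_on(X, k, n) for k in range(n))          # driver: transverse field
H_P = (op_on(Z, 0, n) @ op_on(Z, 1, n)                # problem: invented Ising couplings
       - op_on(Z, 1, n) @ op_on(Z, 2, n)
       + 0.5 * op_on(Z, 0, n))

gaps = []
for s in np.linspace(0.0, 1.0, 201):
    H = (1 - s) * H_B + s * H_P
    levels = np.linalg.eigvalsh(H)                    # sorted energies
    gaps.append(levels[1] - levels[0])

print("minimum gap along the interpolation:", min(gaps))
```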
Onward…

While some of your employees are authoring or coauthoring perfectly-reasonable papers on various QC topics, those papers still bear essentially zero relation to your marketing hype?

I hate the term “reasonable papers.” Sorry. It sounds like the quantum computing Gestapo to me. But beyond that, what hype are you talking about in the press releases? Their news section has absolutely zero about their latest NIPS demo (which is apparently what set you off, Dr. Optimizer.) If anything, I think your beef has to be with the science journalists who are producing articles on the recent paper, or with Hartmut Neven, whose blog post on the Google Research blog has more meat to argue about (the last lines are classic.)
Onward:

The academic physicists working on superconducting QC–who have no interest in being scooped–still pay almost no attention to you?

Argument by authority? Really?

So, what exactly has changed since the last ten iterations?

Actually if you read the NIPS demo paper you would see that there is some interesting new stuff. In particular you would note that they believe they have 52 of their 128 “qubits” functioning. Independent of whether this thing quantum computes or represents a viable technology, getting 52 such flux qubits to operate in a controllable manner such that they can read out the ground state of the combinatorial problem at all is, in my opinion, an impressive feat. The fact that they thought they would be at 128 qubits about a year ago is also a warning to me that this shit is hard. Also the paper gives a nice list of the “problems” they are encountering. In particular they acknowledge here the difficulties arising due to finite temperature and to parameter variability. You’d also read that their classifier doesn’t outperform the one they compare against on false positives (and the real issues with that paper are that comparison and the lack of any comparison of running times!) So yes, there is something new here, and yes, it is interesting, and yes, it still makes me skeptical of D-Wave’s chances!

Why are we still talking?

Good question! I hope you will forgive me, Optimizer, for I have sinned.
Okay, well now that I’ve got that out of my system. Whew. The rest of the Optimizer’s discussion of D-Wave rings partially true. Though his detour into criticizing their AQUA@home project seems silly to me (who cares, really? Have you really met someone who makes the argument presented? Was he or she surrounded by Dorothy and the Tin Man?) Certainly the fact that they are working with Google doesn’t convince me of much (sorry, Google.) But I will stand by my criticism that holding up “quantum coherence” and “entanglement witnesses” as the things that must be demonstrated for D-Wave to have made an interesting device is wrong. Indeed, I’d probably argue that the reason quantum computing folks have made slow progress is that they themselves are hung up on this approach to building a quantum computer. It sure appeals to the scientist in everyone to validate every stage of everything you do, but technology development is different from science. For D-Wave, validating that their initial and final Hamiltonians are working as they think is important, but beyond that, do they really care whether they create entanglement? Of course, it’s my own opinion that their system will fail (finite temperature and problems with parameter control in the middle of the computation), but holding them to quantum gate standards doesn’t do it for me (though every time I see their slogan I choke…quantum computing company? How about quantum technology company, guys?) And the fact that we get our information second hand makes this whole argument rather academic: we don’t really know what is going on behind the walls of their Burnaby, B.C. offices (cue conspiracy theories.)
Like my pa said: if you can’t defend both sides, you’re not having fun 🙂 (Hey, I said tomato throwing, not watermelons or knives! **Ducks**)
Oh, and peoples, stop pestering Scott about D-Wave; he’s just shown a major result in quantum computing and should be out celebrating (the jalapeno burger and beer at <a href="http://www.miracleofscience.us/">Miracle of Science</a> are good. I think I owe him one next time I’m in Boston.)

29 Replies to “In Defense of D-Wave”

  1. @Stas: not sure why Neven’s company having been acquired by Google has much relevance. Far as I can tell the dude does some pretty amazing image recognition work (but that’s not my field.)

  2. “‘it’s not clear to me that these results can be used to rule out polynomial speedups for non-entangled states.’ For separable pure states, isn’t it clear that you can simulate classically in linear time, by keeping track of the state of each qubit? What am I missing?”
    Ah, crap, I meant to write about the “nearly separable states” results, which, to me, might be more closely related to the separable mixed-state problem.
    “In any case, what seems relevant to the D-Wave discussion is that all known quantum algorithms require entanglement—I’m not aware of any evidence that one can get a speedup with separable mixed states. That casts some doubt on their idea of using separable mixed states to solve their customers’ industrial optimization problems.”
    I guess the point I’m trying to make is that neither side has any real case to stand on. Theorists can’t say “separable mixed state, therefore no quantum speedup,” just as the other side can’t say “we have a quantum speedup even though our system is in a separable mixed state.” Of course we both suspect the answer…
    “I can only respond that they has their job and I has mine. And part of my job, as I see it, is to speak truth to cringeworthy claims, in those situations where ignoring them would seem tantamount to assent.”
    Indeed, it is a good job, a worthy job. But from my view I’d say D-Wave has gotten a lot better about this recently than in the past. I mean, their old CEO said some whoppers that would make the Burger King blush.
    “6. Dave, I envy your sunny disposition, which is able to see D-Wave’s “black box” nature as all part of the fun. I hope you can see my crankiness as part of the fun as well.”
    Life would be boring without something fun to stomp our feet about.

  3. Oh, and about Babbage. Damn it, we shouldn’t accept mediocre progress. We should be pushing ahead as fast as possible! We don’t have to be in 1840. This is independent of the intellectual merit of quantum computing (which I also think is high.) This is a question of building a brand new technology that we know can be made, but to which we dedicate seriously too few resources. We need to dream big, and go big, damnit. If we constantly say “quantum computers will never be built in my lifetime” then of course quantum computers will never be built in our lifetime. If instead we say “damnit, we need to build one of these things quickly,” then the worst that can happen is that we fail.
    For me the correct time metaphor is that 1994 (Shor) is equivalent to 1936 (Turing). It took about a decade from there to the invention of the transistor in 1947. By that clock we should already be at the invention of the transistor. We are behind.

  4. Thanks for the great post! There are many important questions here (and hopefully enough tomato goo to go around this holiday season). I agree that the question of what constitutes quantum evidence and/or evidence of entanglement is the big one here, and the path taken by D-Wave is interesting (and unsettling) precisely because this question is not so simple.
    Note that spectroscopic evidence for entanglement in superconducting circuits was, I believe, first seen in Saclay in the 80s. While more recent experiments, as done at Santa Barbara and Yale, meet the gold standard for entanglement through quantum state tomography, it is possible that indirect quantum evidence such as spectroscopy and/or phase diagrams could be the only detectable signal of a system such as D-Wave’s (and such experiments have been performed and published). Unfortunately, indirect evidence is, well, not direct.
    Does this mean we should simply ignore such systems? While sticking with the gold standard is certainly safe (and Glenn Beck approved), it is not surprising that a semi-quantum-device-without-directly-verifiable-entanglement gets VC attention. And providing a theoretical framework to analyze the (indirectly inferred) entanglement of such a system seems like an interesting problem (e.g. the work of Vedral), independent of quantum computation. Without invoking such a framework, the hypothetical skeptic adds little to the noise.
    So, ignoring the noise and the hype, I think there is plenty to enjoy in the D-Wave saga (personally, I think the testing of the coupling network alone is worth the effort, although I would want to know more about their properties at microwave frequencies).

  5. Thanks for the review and links! I didn’t realize that the first author (Hartmut Neven) was the person whose start-up had been acquired by Google in 2006 and who worked in connection with D-Wave after that. This makes the whole thing way more suspicious…

  6. I agree with you that not enough work has been done on understanding the role of entanglement in mixed-state computation. However, I see your Richard Jozsa paper and raise you a http://arxiv.org/abs/quant-ph/0505213, which indicates that entanglement may well still be important in mixed-state algorithms. The jury is still out.
    Nevertheless, it seems to me that this issue is beside the point. As far as I am aware, D-Wave claim to be implementing the adiabatic algorithm for solving NP-complete problems. The theoretical version of this algorithm requires pure states. As far as I am aware, a mixed state version of this algorithm has not been analyzed so I have to admit that I am completely stumped as to what D-Wave are really up to.
    Regarding adiabatic algorithms in general, I am a bit skeptical that they offer any real advantages over conventional approaches to NP-complete problems. There is a bit of a debate going on at the arXiv at the moment about this. To summarize, it seems that the algorithm requires exponential time on random instances (http://arxiv.org/abs/0908.2782), but the Farhi group hit back with a modified algorithm that might remove this difficulty (http://arxiv.org/abs/0909.4766). Again, the jury seems to be out, but personally I would be extremely surprised if you could get more than a polynomial (probably quadratic) improvement over classical methods in the general case.
    With all this uncertainty it seems hard to understand what the claim in the Google blog post to have achieved better results than classically possible means.
    Addendum: I am recommending that we all immediately switch to Bing as our search engine of choice, since I think we should base our preference on which company has the better quantum computing research group. If we manage to make a small dip in Google usage then perhaps we can persuade them to pour some cash into quantum computing research. They would probably have to hire a Nobel laureate to counter Microsoft’s Fields medalist. A small price to pay for all our quantumy eyeballs.

  7. Hi Dave,
    I enjoy playing devil’s-advocate too sometimes, but your devil’s-arguments could be stronger! Reading your post, one finds you essentially agree with me about D-Wave’s prospects—and while your grounds for skepticism are different, the biggest difference I can see is simply that you feel less rage.
    To respond to some of your points:
    1. A few months ago I attended a talk by Steve Girvin, which gave the strong impression that demonstrating 2-qubit entanglement in superconducting qubits was a new result — I apologize if that impression was wrong. As you well know, I’m no expert on implementations, and don’t have a horse in this race.
    2. “Argument by authority? Really?” Of course D-Wave has been arguing by authority from the beginning (“but we’re working with Harvard and Google!”) — and I’m tired of holding ourselves to a higher standard. 🙂 Seriously, I do think the nerd public deserves to know that most people with relevant expertise remain extremely skeptical of D-Wave’s basic claims, and say in private most of what I say in public. Given how much joy blogging about this topic has brought into my life, I can certainly understand why more prudent QC researchers have chosen to remain silent. I still find their behavior cowardly, of course, and hope it changes.
    3. Unlike (it seems) most people in both the pro-QC and anti-QC camps, I don’t see the progress in experimental QC as particularly slow. But maybe that’s because I never expected it to be “fast”! It took more than a hundred years from Babbage to the transistor—and the progress in quantum computing between 1995 and today compares very favorably to (say) the progress in classical computing between 1825 and 1840.
    4. “it’s not clear to me that these results can be used to rule out polynomial speedups for non-entangled states.” For separable pure states, isn’t it clear that you can simulate classically in linear time, by keeping track of the state of each qubit? What am I missing?
    For separable mixed states, of course, there’s the longstanding open problem of whether one can get a speedup—I worked on that problem my first semester at Berkeley nine years ago! I personally see that as saying more about the limitations of our proof techniques than anything else (for all we know, the question could even have a different answer for qubits vs. qutrits, or for real amplitudes vs. complex ones!).
    In any case, what seems relevant to the D-Wave discussion is that all known quantum algorithms require entanglement—I’m not aware of any evidence that one can get a speedup with separable mixed states. That casts some doubt on their idea of using separable mixed states to solve their customers’ industrial optimization problems.
    5. “It sure appeals to the scientist in everyone to validate every stage of everything you do, but technology development is different than science.” I’ve heard this argument many times: it’s not D-Wave’s job to satisfy skeptics like you; their job is just to build a working device. (Which might be paraphrased as: “as long as the car we’re building will work, who cares if it has no engine?”) I can only respond that they has their job and I has mine. And part of my job, as I see it, is to speak truth to cringeworthy claims, in those situations where ignoring them would seem tantamount to assent.
    6. Dave, I envy your sunny disposition, which is able to see D-Wave’s “black box” nature as all part of the fun. I hope you can see my crankiness as part of the fun as well.

  8. not sure why Neven’s company having been acquired by Google has much relevance
    I guess his group still operates in “Neven Vision” mode without much interference or control from Google’s management, but you didn’t give any weight to the “fact that they are working with Google” anyway, so never mind…

  9. Dave, I actually agree that D-Wave seems to have improved over the last couple years in certain respects. In particular, it’s no longer that hard to imagine them stumbling into something profitable—presumably, something classical-optimization-related that had nothing to do with quantum—in spite of the multiple gaping holes in their stated business plan (which of course is my main concern). I was alluding to those improvements when I made the comment—which you unfortunately found too Gestapo-like—about many of the scientists they’ve hired doing perfectly reasonable things.

  10. @Geordie: You are right that there are earlier two-qubit experiments, such as the early demonstration of a CNOT gate by Nakamura with charge qubits in 2003. However, I am referring to experiments which performed full two-qubit state tomography, such as Steffen et al., allowing for a quantitative measure of entanglement.

  11. The 2003 experiment is, yes, evidence of entanglement, but it’s not the kind of evidence that would convince Scott (who I use as a representative of the evil QC community) of anything 🙂 (I don’t want to disparage the result, as the experiment I’m sure was a tour de force, but by the standard of just revealing energy levels consistent with a certain structure, aren’t most spectroscopy experiments “evidence of entanglement”?)

  12. Dave: No, not all. But let’s say the following hold: (a) you make the best QM model of a multi-qubit circuit you can; (b) you use this model to predict the allowed energies of the multi-qubit system; (c) you measure the energy levels (using, e.g., spectroscopy) and you find quantitative agreement with the predictions of the QM model; (d) the eigenstates of the QM model are entangled for some realized experimental parameters; (e) the temperature is less than the gap between ground and first excited states. This type of spectroscopy most definitely is evidence of entanglement.
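    To put rough numbers on (a)-(e), here is a toy sketch (all parameters invented) using the kind of two-qubit transverse-field model commonly written down for coupled flux qubits: compute the energy levels the model predicts for spectroscopy, check whether the model’s ground state is entangled, and note the gap that the temperature has to beat.

    ```python
    import numpy as np

    # Toy version of the (a)-(e) argument, with invented parameters: a two-qubit
    # transverse-field model of the kind written down for coupled flux qubits.
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    I = np.eye(2)

    D1, D2, e1, e2, J = 1.0, 1.3, 0.2, -0.1, 0.6      # made-up tunnelings, biases, coupling
    H = (-D1 * np.kron(X, I) - D2 * np.kron(I, X)
         + e1 * np.kron(Z, I) + e2 * np.kron(I, Z)
         + J * np.kron(Z, Z))

    # (b) the model's predicted energy levels, which (c) spectroscopy would be
    # checked against:
    evals, evecs = np.linalg.eigh(H)
    print("predicted levels:", np.round(evals, 3))

    # (d) is the model's ground state entangled? Entropy of one qubit's reduced
    # state: 0 for a product state, 1 bit for a Bell state.
    M = evecs[:, 0].reshape(2, 2)
    probs = np.linalg.eigvalsh(M @ M.conj().T)
    entropy = -sum(p * np.log2(p) for p in probs if p > 1e-12)
    print("ground-state entanglement entropy (bits):", round(entropy, 3))

    # (e) the gap the temperature has to be below:
    print("gap to first excited state:", round(evals[1] - evals[0], 3))
    ```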

  13. Hi Dave,
    Great post! My earlier comment seems to have gotten lost, but I think this question of what constitutes evidence for entanglement (and/or quantum-ness) is very interesting and not at all clearcut. Spectroscopic evidence (or other indirect measures) may not meet the gold standard of state tomography, but for certain systems (such as the D-Wave design) it may be the best available. Note also that the first such evidence (in any superconducting circuit) I am aware of actually dates back to the 80s, in experiments performed at Saclay.
    My question, though, is the following: is there really a skeptic-proof framework to analyze a possibly-quantum-device-without-directly-verifiable-entanglement? I think the answer is no, but perhaps I should be less skeptical.

    It’s evidence that your model is correct (and damn straight I would update my priors about the system exhibiting entanglement), but because you don’t know anything about the eigenvectors from this sort of experiment, it doesn’t “prove” that the system has entanglement. Evidence, yes. Demonstration, no. (Of course the experiment performed in 2003 in superconducting qubits is exactly the kind of step that needs to be made to progress at all in coupling the qubits, so for me, it’s more important for that than for demonstrating anything about entanglement. As you can see from above, I frankly care less about demonstrations of entanglement and more about whether the physics of the device is what you think it is…aka I trust physics.) But I also believe that the spectrum of He gives evidence of entanglement, no? I mean, parahelium is a singlet, right? We build the model, look at the spectrum, confirm it is correct, cool the thing down, and wham, we’ve got “evidence” of entanglement. In this sense lots of evidence of entanglement has been demonstrated down through the ages.
    Of course, I’ll probably say that tomographic proof is slippery: it requires validation of measurements, and because of that it’s subject to loopholes. I guess I’d come down on the side that says to “prove” entanglement you’ve got to do a Bell experiment (and even then, all you’ve proven is that the system has no local hidden variable theory, which does not really show that you’ve got quantum entanglement.)
    One could waste a lot of time worrying about showing entanglement in a system. It seems better to validate the physics of the device on a coarser level and then stand back and see what happens when you try complex “quantum” experiments.
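    To make the “evidence, yes; demonstration, no” point concrete, here is a toy example (spectrum invented): two two-qubit Hamiltonians with identical energy levels, one whose eigenstates are product states and one whose eigenstates are Bell states. A measured spectrum alone cannot tell them apart; it is the model you trust that pins down the eigenvectors.

    ```python
    import numpy as np

    # Two made-up two-qubit Hamiltonians with exactly the same energy levels.
    # One is diagonal in the computational (product) basis; the other has the
    # same eigenvalues attached to Bell (maximally entangled) eigenstates.
    energies = np.array([-1.5, -0.5, 0.5, 1.5])       # invented spectrum

    H_product = np.diag(energies)

    bell = np.array([[1, 0, 0, 1],
                     [1, 0, 0, -1],
                     [0, 1, 1, 0],
                     [0, 1, -1, 0]]).T / np.sqrt(2)   # columns are the four Bell states
    H_bell = bell @ np.diag(energies) @ bell.T

    print(np.round(np.linalg.eigvalsh(H_product), 6))  # identical spectra...
    print(np.round(np.linalg.eigvalsh(H_bell), 6))
    # ...but H_bell's eigenstates are entangled while H_product's are not, so a
    # spectrum by itself is evidence for a model, not a demonstration of
    # entanglement.
    ```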

  15. Apologies, all. I just noticed a few comments stuck in my spam filter. If you ever see your stuff stuck in the filter, please email me. Matt, I don’t know why the heck it keeps flagging you. Scienceblogs apparently thinks you are a spambot 🙁

  16. Dave: I can attest to significant time spent worrying about that slippery slope, and I think we agree on every step along the way (with your work on the communication cost of Bell inequalities being an important piece of the puzzle, in my opinion). However, helium was definitely a big deal (i.e. Born considered it a crucial test of quantum mechanics altogether). So, as far as evidence for entanglement goes, I think helium ranks above both aluminum and niobium (ignoring indistinguishability, control of subsystems, and all that).

  17. @Scott: The two-qubit entanglement that we demonstrated at Yale was not the first entanglement result in superconducting qubits. UCSB and ETH both published earlier works. However, we did show substantially higher concurrence than anything done before, which combined with long coherence times allowed us to run simple two-qubit algorithms.

  18. Over on Shtetl Optimized I have just posted a defense of D-Wave.
    I began writing it as a humorous post … but finished it as a serious post … and in consequence, I find myself agreeing with Dave (Bacon) that D-Wave may well be onto something important.
    This does *not* imply that Dave and I agree on all the technical details, but rather, that we both see merit in this work.
    Congratulations, Geordie … and happy holidays to all! 🙂

  19. Thanks John… although any congratulations for anything we’re doing rightfully belongs to the whole team. This is a unique group of people all of whom are playing a critical role in bringing this new technology into the world.

  20. “in spite of the multiple gaping holes in their stated business plan” — speaking as someone who’s taught PostDoc students with Doctorates in Business Administration, and who gets paid $110.00/hour to write business plans:
    (1) The real value of creating a business plan is not in having the finished product in hand; rather, the value lies in the process of researching and thinking about our business in a systematic way. The act of planning helps us to think things through thoroughly, study and research if we are not sure of the facts, and look at our ideas critically. It takes time now, but avoids costly, perhaps disastrous, mistakes later. As I often say, it has an external use (to show investors), and an internal use (to keep us honest within ourselves).
    (2) Given a choice between good managers with a poor plan, and poor managers with a good plan, the investors will pick the good managers every time – one can always hire MBA students to re-write the plan. But MBA students who know QM, that may be in short supply.

  21. Hmmmm … Rod, are there really only *three* options?
    I’m old enough to remember a decade of enthusiasm for fluidic computers … micro-fluidic devices whose computational state-space was bistable Navier-Stokes fluid flow. As computers, micro-fluidic devices found only niche applications, yet they played a vital role in catalyzing the development of what today are economically key disciplines like computational fluid dynamics and micro-fluidics.
    Mightn’t the science and engineering of quantum computers plausibly develop along the same lines as the science and engineering of fluidic computers?
    The point being, that quantum factoring engines (for example) may conceivably arrive someday, but such engines require so many technological advances (relative to our present capabilities) that it is implausible (IMHO) that factoring engines will be the *first* transformational applications of QIT/QIP technology.
    In which case, a viable “Option D” for AQUA and D-Wave would be to form a strategic alliance, focusing not only on long-term computational goals, but also on shorter-term (and easier) sensing and simulation goals.
    There is an excellent 1989 essay by Alan C. Kay titled Inventing the Future that discusses these points.

  22. Andreas, thank you for that link.
    Does anyone know of a reference that discusses what happens to the adiabatic theorem when Hamiltonian dynamics is pulled-back from (exponentially large-dimension) Hilbert space onto (polynomially large-dimension) Kählerian state-spaces?
    AFAICT (after a quick search) there is a mathematical literature relating to this question (Gromov-Witten invariants?)—for the physical reason that in both cases a symplectic structure is present that ensures the answer is interesting—however that literature is not (at present) developed along lines that obviously relate to the D-Wave/Google framework.
    If there is anyone in the blogosphere who is fluent in all three languages (informatics, geometry, and practical engineering), perhaps they will provide a Rosetta Stone!
    These considerations motivate what I said (at first in jest, then seriously) on Scott’s Shtetl Optimized blog “No-one shall expel us from the mathematical paradise that D-Wave/Google has demonstrated for us!” 🙂

  23. John, I read Kay’s essay long ago…
    I don’t agree with many things about D-Wave, but in some ways they are doing the right things — trying to figure out what it takes to build a real system, building a team and the supporting technologies.
    As to how QC will develop, I’m fond of saying that, as a systems architect, I want to build one and put it in the hands of Torvalds, Knuth, and Lampson, and see what comes out. But saying that alone would be a copout; it’s not possible to build a computer without *some* idea of how it will be used. And the most prominent — and challenging — application at the moment is factoring, though in practice it might not be the first economically viable application.
