Update: 2/5/07 After some conversations, I’ve realized that there is, I think, a big algorithmic wrench in the endeavor described below. Due to this I’ve added some text to this post in italics to describe what I think this wrench is and why my skeptic meter is getting higher and higher.
I’ve been meaning to blog about this for a while, but haven’t gotten my lazy fingers to do any of the typing: D-Wave has announced that they are going to be demoing their 16-qubit machine, called “Orion”, on Feb 13 and Feb 15. Cool pictures here, here, and a pretty dilution fridge here. D-Wave’s homepage has a pretty picture of the two locations they’re demonstrating at (well, they are demonstrating remotely; I guess it’s kind of hard to haul your lab down to California!)
Okay enough pictures! Onto the details. Sixteen qubits! ???
First the details. The system they have set up claims to be an adiabatic quantum computer with sixteen qubits. It has a Hamiltonian made up of Ising interactions plus a single-qubit term along the direction of the Ising interaction and a transverse field. The idea, of course, is to use this for adiabatic quantum algorithms. Finding the ground state of a two-dimensional Ising model with a magnetic field (where each choice of the Ising couplings and magnetic field terms is a problem instance) is NP-complete. So the idea, I guess, is to start with all transverse magnetic field, begin in the ground state of this system, and then drag the system to the new ground state of the Ising model on your sixteen qubits. Sounds fun. Now one thing to think about is that 2 to the 16 is about 65 thousand. So a brute-force search for the ground state is most certainly feasible and will beat out the D-Wave computer. But who cares, right? I mean the proof of principle is more important! Indeed. And there are many things to think about from what we know about the system… some positive and some negative.
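To get a feel for how small sixteen qubits is, here is a minimal brute-force sketch in Python: it enumerates all 2^16 = 65,536 spin configurations of a toy 2D Ising instance and reads off the ground state directly. The 4x4 grid, couplings, and fields below are made-up illustrative values, not anything D-Wave has published.

```python
import itertools
import random

# Toy 4x4 Ising instance with local fields (illustrative values only):
#   E(s) = sum over grid edges J_ij * s_i * s_j + sum_i h_i * s_i,  s_i in {-1,+1}
n = 16
rng = random.Random(0)
edges = {}
for r in range(4):
    for c in range(4):
        i = 4 * r + c
        if c < 3:
            edges[(i, i + 1)] = rng.choice([-1.0, 1.0])  # horizontal bond
        if r < 3:
            edges[(i, i + 4)] = rng.choice([-1.0, 1.0])  # vertical bond
h = [rng.uniform(-1.0, 1.0) for _ in range(n)]

def energy(s):
    return (sum(J * s[i] * s[j] for (i, j), J in edges.items())
            + sum(h[i] * s[i] for i in range(n)))

# All 2^16 = 65,536 configurations: enumerable in well under a second.
ground = min(itertools.product((-1, 1), repeat=n), key=energy)
print(energy(ground), ground)
```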
Okay, since I’m generally a positive guy, I’ll start out with the negatives. 😉 First of all, I understand, and certainly the D-Wave people understand, that we do not think that quantum computers can solve NP-complete problems in polynomial time. We do believe, however, that they can achieve polynomial speedups over the classical algorithm for brute-force searching. So great, the question is whether this quadratic Grover-type speedup is useful in practice. (Update 2/5/07: Okay, so there appears to be a big problem with this, as far as I understand it. If you have a Grover-type search where there is one marked item, and you run a linearly interpolating adiabatic algorithm, you get no speedup. But if you slow down at just the right time, then you can get a quadratic speedup. Basically the idea is to slow down where the algorithm has its smallest gap. But now there is a problem: how do you know (a priori) when to perform such a slowing down? Further, how do you know what interpolation to try? Finally, NP-complete does not mean only one solution, so what do you do with multiple solutions? As far as I know, for a system like the Ising model in a magnetic field in 2D, we have no idea how to realize the quadratic speedup because we cannot tell when to slow down! So color me extremely skeptical about using this sort of setup for quadratic speedups.) Now there are lots of interesting questions here. For example, you need to compare the quadratic speedup to the best known SAT solvers. Does Groverish search really help? (I could be wrong here, but my impression was that it is not known how to beat the best SAT solvers? Help, experts, correct me!) But the real problem I have is just one of PR. All this mention of NP-complete for their adiabatic quantum computer makes it sound distinctly like they are claiming that they can efficiently solve NP-complete problems. Don’t believe me? Well, read the comments on the blog post linked above and you’ll see what I mean. Now, I know this is nitpicky, but I think when you’re dealing in a field which is highly highly highly susceptible to hype hype hype, you need to be extra careful. (Update 2/5/07: Indeed, given that we do not know how to achieve even this quadratic speedup in this type of setting, I don’t see why they can claim much relationship to quantum computing speedups at all.)
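For concreteness, the closed-system result I have in mind here is the adiabatic Grover search of Roland and Cerf (quant-ph/0107015). With uniform initial state |psi> and marked state |x> out of N items, the interpolating Hamiltonian and its gap are

```latex
H(s) = (1-s)\bigl(I - |\psi\rangle\langle\psi|\bigr) + s\bigl(I - |x\rangle\langle x|\bigr),
\qquad
g(s) = \sqrt{1 - 4\Bigl(1 - \tfrac{1}{N}\Bigr)s(1-s)},
\qquad
g_{\min} = g\bigl(\tfrac{1}{2}\bigr) = \tfrac{1}{\sqrt{N}}.
```

A linear schedule s(t) = t/T needs T = O(N), i.e. no speedup; only a local schedule with ds/dt proportional to g(s)^2, which slows down precisely near s = 1/2, recovers the Grover time T = O(sqrt(N)). For a generic Ising instance you don’t know where (or how small) the minimum gap is, which is exactly the problem.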
Okay, so that’s kind of a low blow, just disagreeing with PR. I’m happy to admit it’s not something that is that important. The real questions, of course, are: will this thing work? Is it a quantum computer? What fidelities do they achieve? Whoa… one thing at a time. (1) It will work. If it didn’t, they’d be silly to demo it. And they are deadly serious (well, deadly if you are an NP-complete problem 😉 )
Okay, (2) is it a quantum computer? First comment: adiabatic quantum computation (error free) is equivalent to the quantum circuit model (error free). So adiabatic quantum computers (error free) are quantum computers (error free) in the most traditional sense. If I were going to sell Orion to my investors, I’d sell this point over and over and over again. Of course, for those of you who read the parentheses, you’ll know what I’m going to say next. We don’t know how to turn a system like the one they describe, with adiabatic controls only, into a fault-tolerant machine. We have a few ideas (notably the work of Jordan et al., which is very similar to work that Kitaev and others have done… also early work by Childs et al.), but I don’t think we know how to build a fault-tolerant all-adiabatic quantum computer… yet! So should this matter? Well, let me quote from a paper intro I’m writing right now:
It is tempting to consider all physical systems as being information processing devices which manipulate through their dynamics the information labelled by the system’s different states. This view of computation, however, is severely naive because it leaves out consideration of whether it is practically possible to robustly store and fault-tolerantly manipulate the information encoded into the physical system in the presence of noise and imprecise control.
To me you do not have a computer until you have a fault-tolerant computer. So to the question of whether they have a quantum computer we can say: adiabatic quantum computers are quantum computers if we can figure out how to make them fault-tolerant.
Okay, what about (3) the fidelities? This is where the real questions come in. At what temperature are they running their adiabatic quantum computer, and how does this temperature compare to the coupling strengths they have? More importantly, however, how does this temperature compare to the gap in the adiabatic quantum algorithm (which will be smaller than the coupling terms!)? This latter problem is one of the big problems for error correction in adiabatic quantum systems. Since the gaps for adiabatic quantum algorithms scale inversely as a polynomial in the problem size, you need to have your temperature scale down in a similar way. Now of course, this isn’t totally true, because it is possible for a temperature bigger than the gap to be USEFUL in the adiabatic quantum algorithm (a sort of hybrid adiabatic-annealing computer). However, our understanding of this type of algorithm is far from complete, and how it affects, say, the simulation of quantum circuits on an adiabatic machine is, as far as I know, an open question.
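As a rough back-of-the-envelope way to see why the temperature has to chase the gap (my heuristic sketch, not a rigorous open-systems statement):

```latex
p_{\text{excite}} \;\sim\; e^{-\Delta_{\min}/k_B T},
\qquad
\Delta_{\min} \sim \frac{1}{\mathrm{poly}(n)}
\;\;\Longrightarrow\;\;
k_B T \;\lesssim\; \Delta_{\min} \sim \frac{1}{\mathrm{poly}(n)}.
```

That is, to keep the equilibrium occupation of excited states small, the temperature has to shrink inverse-polynomially with problem size, which is a brutal engineering requirement.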
Another thing to worry about in the fidelities is the couplings themselves. How well are they controlled? If I want a certain energy coupling in my adiabatic quantum algorithm, how certain are you that this coupling is the correct one (say Ising ZZ) or has the correct strength? If that control isn’t precise enough, you can cause a phase transition from the ground state you want to one you don’t want. Very bad.
Finally you might worry about the fidelities of two other components, the preparation and the readout.
Of course, all these questions about fidelity are ones which hopefully D-Wave will settle, either by publishing about their system or by convincing us all that they are doing legit things by building quantum computers that behave as expected (and eventually solve a SAT problem the NSA can’t solve… assuming the NSA doesn’t have a quantum computer already, of course! See, there I go, mentioning SAT instances in the same sentence as quantum computation when I really should have mentioned factoring!)
Finally, let me just say that I think the fact that D-Wave is putting themselves out there and trying to push things is great. I worry about what a failure would do to future quantum computing endeavors, but I love underdogs, so I have great sympathy for trying. (Update 2/5/07: But what should we make of the fact that the algorithmic case for their method seems, at best, to be in grave doubt? For me, this is dubious territory to tread. While I won’t cry doom, I will cry that the case is very shaky.) I also love the fact that they are going for breadth versus depth. I want more qubits, dammit, and if a crazy Canadian company is going to be the one to help me get those qubits, then I’m as happy as can be! (Update 2/5/07: I still like the fact that they are going with a lot of very very noisy qubits. In my mind this makes the endeavor like NMR+. Whether this plus is big enough to fashion a real quantum computer is the billion-dollar question.)
Hi Dave:
This is an excellent overview. A couple of points of clarification:
1. I absolutely want to make clear that it’s the “Groverish” result we’re focused on using and not anything exponential (yet). If I gave the impression otherwise in any of my stuff I apologize.
2. Re. competing with classical algorithms: this is a great point. A square-root speedup by itself is pretty useless for most problems (for example, the best classical algorithm for max ind set scales something like 2^{0.2N}; a naive application of unstructured search (2^{0.5N}) loses — see the toy cost comparison after this comment). What we’re doing is trying to combine the best classical algorithms with hardware calls to invent hybrid algorithms that beat the best known classical algorithms (even if only by a small change in the exponent). You may have noticed that over the past year or so we’ve been hiring a bunch of discrete optimization people… that’s what they’re working on. It’s a fascinating problem.
3. NSA notwithstanding, people are usually OK with approximate answers, and there is some theory behind the idea that AQC-type machines have advantages in getting good approximate answers (see for example quant-ph/0701161).
4. Last point on the “fidelities” questions. These are all excellent questions. Putting aside the details for a moment, there’s a very important concept that ties a lot of this together, and I’d like to finish with a brief description of the idea.
Given a “system that’s globally phase coherent over the entire calculation”, you can show that unstructured search is quadratically sped up over a classical computer. This is an optimality result for scaling. You simply can’t do any better than O(sqrt(N)). We all agree on this.
What I don’t agree with is automatically assuming that the argument holds IN THE OTHER DIRECTION. It does not logically follow from the above that any machine with O(sqrt(N)) scaling on unstructured search is a “system that’s globally phase coherent over the entire calculation”. This may or may not be true.
While the jury is still not in, our studies of these systems seem to indicate that AQCs, in the presence of thermal and spin bath environments, can still provide O(sqrt(N)) scaling even though the central QC system is definitely NOT a “system that’s globally phase coherent over the entire calculation”.
I suspect that if this turns out to be correct, it is an extremely important conceptual point.
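To make the exponent arithmetic in point 2 above concrete, here is a toy cost comparison in Python; the 2^{0.1N} “hybrid” exponent is purely hypothetical, included only to show what a small change in the exponent buys.

```python
# Step counts (up to constants) at a few problem sizes N:
#   naive Grover over all 2^N assignments    -> 2^(0.5 N)
#   best known classical max-ind-set solver  -> ~2^(0.2 N)
#   hypothetical hybrid that quadratically
#   speeds up the *structured* search        -> 2^(0.1 N)  (illustration only)
for N in (50, 100, 200):
    grover, classical, hybrid = 2 ** (0.5 * N), 2 ** (0.2 * N), 2 ** (0.1 * N)
    print(f"N={N:>3}:  Grover {grover:.1e}  classical {classical:.1e}  hybrid {hybrid:.1e}")
```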
Thanks, Dave! Now I don’t have to blog about this.
The best SAT solvers work by variations on local search — and no, to my knowledge we don’t have a quantum algorithm that provably outperforms them. (The adiabatic algorithm might outperform them if only we could analyze it…)
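For readers who haven’t seen one, here is a minimal WalkSAT-style local search sketch in Python, just to illustrate the kind of solver being referred to (the noise parameter and toy formula are arbitrary choices):

```python
import random

def walksat(clauses, n_vars, max_flips=100_000, p=0.5, seed=0):
    """Minimal WalkSAT: clauses are lists of nonzero ints, sign = polarity."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]  # index 0 unused

    def sat(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign[1:]              # found a satisfying assignment
        clause = rng.choice(unsat)         # pick a violated clause
        if rng.random() < p:
            var = abs(rng.choice(clause))  # noise step: random variable in the clause
        else:
            # greedy step: flip the variable that newly breaks the fewest clauses
            def breaks(lit):
                v = abs(lit)
                before = [sat(c) for c in clauses]
                assign[v] = not assign[v]
                broken = sum(1 for c, was in zip(clauses, before) if was and not sat(c))
                assign[v] = not assign[v]
                return broken
            var = abs(min(clause, key=breaks))
        assign[var] = not assign[var]
    return None                            # gave up

# Toy formula: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```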
There are adiabatic theorems for classical systems too. Recalling that it can be mighty subtle (meaning, NP-hard) to distinguish classical correlations from Bell-type quantum correlations, perhaps it is equally hard to distinguish classical adiabatic calculations from quantum adiabatic calculations?
What do the experts say about this?
Dave:
Yeah it’s a very interesting result. Tough to know at this stage whether it’s correct. Note that strictly speaking with an environment the evolution is not adiabatic anymore (at least where it matters at the anticrossings).
While the details are tough it’s obviously true that for any real “AQC” the minimum gaps will always be much smaller than temperature for hard problems–it’s the right limit to think about these things.
Hey Geordie,
I like your last point a lot. From my point of view, a quantum computer is never fully coherent, since error correction is always going on!
But what I find fascinating is the idea that if you perform an adiabatic quantum algorithm at a temperature below the initial and final gaps of the system, but not necessarily below the minimum gap (or whatever criterion you want to use) during the computation, even then the adiabatic algorithm may succeed. That’s a very cool form of error correction. Not sure if I believe it yet, but it doesn’t seem impossible!
The adiabatic theorem as many of us learned it from Messiah’s textbook is a statement about closed quantum systems. Therefore the idea that we can just add an environment and still use the same notion of adiabaticity is a bit naive. We have to be careful about just accepting the same old version of the adiabatic theorem when the system-bath interaction is turned on. Instead, the adiabatic theorem needs to be rederived in a fully self-consistent manner that includes the bath.
Recently there have been a number of studies of this problem. Thunström et al., in “Adiabatic approximation for weakly open systems”, derive a new adiabaticity condition in the weak coupling limit, using a Hamiltonian approach. Marcelo Sarandy and I, in “Adiabatic approximation in open quantum systems”, used the machinery of convolutionless master equations to derive a new adiabatic condition for a large class of open systems (including non-Markovian baths). Perhaps the most important message, also confirmed by the later Thunström et al. study, is that generically adiabaticity will break down in an open system after some finite time, even if the evolution started out being adiabatic. An intuitive way to understand this is that the presence of the bath broadens the system energy levels, so that where there was a gap if the system is treated as closed, that gap can close if the broadening is large enough.
This is generally bad news for AQC, and we studied the consequences in “Adiabatic Quantum Computation in Open Systems”. In that paper we also showed that there is a way out, using Siu’s idea of unitary interpolation: essentially one can keep the gap constant using an appropriate interpolation along the adiabatic evolution, which is not the usual linear interpolation between the initial and final Hamiltonians. However, this requires many-body interactions, and furthermore the bath must be Markovian! Thus the Jordan et al. and Dave’s ideas about QEC for AQC as a means to stabilize the gap are what we need. But again, these ideas should be formulated in a consistent open-systems setting, without just importing the adiabatic theorem in its closed-system form.
This is also partly a comment on Geordie’s idea of using a “system that is not globally phase coherent over the entire calculation”. I take it that this refers to AQC with an open system, and I wonder if the conclusions drawn make use of the adiabatic theorem for open or closed systems? Another important finding from the studies of the adiabatic theorem for open systems is that the usual adiabaticity time condition, “time much greater than inverse of the minimum gap” (I’m referring to the version by Schaller et al., “General error estimate for adiabatic quantum computing”), is replaced by a new time condition which involves the spectral gaps of the superoperator generating the open-system evolution.
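For reference, the closed-system adiabaticity criterion being contrasted here, in one common heuristic form (Schaller et al. derive sharper rigorous estimates), is

```latex
T \;\gg\; \max_{s\in[0,1]} \frac{\bigl|\langle e_1(s)|\,\partial_s H(s)\,|e_0(s)\rangle\bigr|}{\Delta(s)^2},
\qquad
\Delta(s) = E_1(s) - E_0(s),
```

where |e_0(s)> and |e_1(s)> are the instantaneous ground and first excited states; in the open-system treatments cited above, the role of Delta(s) is instead played by the spectral gaps of the superoperator generating the evolution.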
Dave:
Re. the concern about using a local schedule vs. a global schedule: there are grounds for believing that thermal environments set the timescale for how fast you can sweep, not the size of the minimum gap; see for example the TAQC paper where we only use global schedules. You don’t have to know where the anticrossing(s) are.
Re. the gap much less than T: certainly this is the limit we’re in for hard problems. For example, in the (contrived) adiabatic Grover search problem the gap shrinks exponentially with problem size, while temperature is fixed. However, one of our main contentions is that gap much less than T is not the end of the world, and you can in fact get better scaling than conventional computers in this limit (related to the point above).
“While the details are tough it’s obviously true that for any real “AQC” the minimum gaps will always be much smaller than temperature for hard problems”
I don’t think this is obvious at all… indeed, I think the point of providing error correction for adiabatic systems is exactly to get rid of this effect (the important thing is what gap the environment sees, not what the actual gap is!)
Can some preliminary runs serve to detect the location of anticrossings?
I don’t understand your new italicized remarks. Why don’t we know when to slow down? For Grover search with a single marked item, e.g., you slow down in the middle (if the initial and final Hamiltonians are normalized properly). I’m not sure why multiple solutions present any more difficulty than multiple solutions do for Grover search in the circuit model. For structured search, I have no idea what might happen, but a 2^{0.1N} quantum algorithm for max ind set (versus 2^{0.2N} classically) would be interesting.
quantum: For multiple solutions in quantum amplitude amplification (i.e. in the circuit model) we use a phase estimation algorithm to figure out how long to rotate…with an adiabatic algorithm it is much less obvious when to slow down.
Further, when there is one marked item, say H = -|x><x|, the evolution can be described in terms of the initial state |phi> and an orthogonal state |phi’> (which has most of its amplitude in |x>), and then the rotation starting from |phi> is just in terms of these states… for a general Hamiltonian this technique fails, and I don’t think anyone knows how long to run the adiabatic algorithm for, much less when to slow down (please someone correct me if I am wrong about this.)
Well, for example: Run the algorithm as if there is just one solution a constant number of times, then as if there are two solutions, then four, etc. The overhead is logarithmic. For a general Hamiltonian, yes the problem will be hard.
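A minimal classical mock-up of that doubling schedule in Python; the run_with_guess stub is hypothetical, and simply simulates a run that succeeds with high probability once the guess is within a factor of two of the true number of solutions:

```python
import random

N = 1 << 16                  # search space size, as in the 16-qubit example
solutions = {12345, 54321}   # unknown to the algorithm; used only by the mock

def mock_run(m, rng):
    """Hypothetical stand-in for one run tuned for m solutions: succeeds
    with probability 0.9 when the guess is within a factor of 2 of the truth."""
    k = len(solutions)
    if k / 2 <= m <= 2 * k and rng.random() < 0.9:
        return rng.choice(sorted(solutions))
    return rng.randrange(N)  # otherwise: a useless random candidate

def doubling_search(is_solution, run_with_guess, seed=0, tries_per_guess=3):
    """Guess m = 1, 2, 4, ... solutions; verify each candidate classically.
    The overhead over the (unknown) correct guess is O(log N) runs."""
    rng = random.Random(seed)
    runs, m = 0, 1
    while m <= N:
        for _ in range(tries_per_guess):
            runs += 1
            candidate = run_with_guess(m, rng)
            if is_solution(candidate):
                return candidate, runs
        m *= 2
    return None, runs

print(doubling_search(lambda x: x in solutions, mock_run))
```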
The more pedantic problem for SC QC calculations is one of verifiability, both of the device’s overall function and of its calculations to the required accuracy.
Imagine breaking or operating an encryption scheme to only some approximation of the required accuracy. What becomes an acceptable error? Assessed by what means? By an enemy’s own QC doing the same encryption-key calculation? (one example)
If for some hypothetical reason there is no practical way to test the D-Wave arrays at larger sizes (to the required calculation tolerances), this becomes amusing.
If this proves to be the case, the only way I can think of to quickly have some assurance in the calculation’s accuracy is to have multiple units calculate the same problem and HOPE that the “voted” results are identical to the required tolerance.
If not, things get interesting: if a mere voting-type scheme for assurance of the QC result is indeterminate, it would prove rather amusing if stray artifacts, in say fabrication, material morphology, or small non-uniform directional effects in thin-film coating deposition fluxes, provided subtle sources of irreproducibility in the required larger arrays of JJ elements.
Interesting how this will transpire. Fascinating really. I think (hope) these musings will prove irrelevant, but maybe not.
And then there is always the teraflop 80-core chip:
http://www.theinquirer.net/print.aspx?article=37572&print=1
The benchmarks for single-chip teraflop power will change pretty dramatically and very shortly.
Remember H.T. Kung’s CMU systolic processors back in the mid-’80s?
Mark,
I think those benchmarks are outdated.