So did anyone at MIT go to this talk and care to comment:
Mohammad Amin (D-Wave)
Adiabatic Quantum Computation with Noisy Qubits
Adiabatic quantum computation (AQC) is an attractive model of quantum computation as it may naturally possess some degree of fault tolerance. Nonetheless, any practical quantum circuit is noisy and one must answer important questions regarding what level of noise can be tolerated. Gate model quantum computation relies on three important quantum resources: superposition, entanglement, and phase coherence. In this presentation, I will discuss the role of these three resources and the effect of environment upon them with respect to AQC. I will also show a close relation between open AQC and incoherent tunneling processes as in a double-well potential. At a more microscopic level, I will present a non-Markovian theory for macroscopic resonant tunneling, together with recent experimental results on superconducting flux qubits which demonstrate excellent agreement with the theory and may shed light on the microscopic origin of flux noise in these devices. Finally, I will discuss the effect of low and high frequency noise on practical AQC processors and compare AQC with thermal annealing.
Update (11/21/07): Geordie Rose has put the slides of the talk online at this blog post. I’ll have to look at them while eating turkey.
If I understood correctly, Mohammad made the interesting and somewhat provocative claim that the system remains quantum even if the T2 time of the individual qubits is much shorter than the total runtime of the algorithm, provided that the noise-induced broadening of the energy levels is small enough that the spectrum remains discrete. The MIT professors gave Mohammad a very thorough grilling. I don’t believe the audience reached a definitive conclusion. As I understand it, the crux of the matter was that, in his modelling of noise, Mohammad treated the adiabatic quantum computer as essentially a two-level system, ignoring the higher eigenstates. Mohammad did not really explain in his talk why this approximation is justified, instead referring to some of his papers. So now some of us are planning to read those papers in order to determine whether we find the argument convincing.
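For concreteness, here is one way to write the claimed condition (my notation, not necessarily Amin’s): with $t_f$ the total annealing time, $\Delta$ the relevant energy gap, and $W$ the noise-induced level broadening,

```latex
W \;\sim\; \frac{\hbar}{T_2} \;\ll\; \Delta
\qquad \text{(broadened levels stay resolved; spectrum remains discrete)}
```

so the claim is that this can hold, and the evolution stay quantum, even when $T_2 \ll t_f$.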
I wasn’t at that talk, but I was at the SC07 talk by Geordie Rose. It was a pretty well presented talk; if you weren’t too aware of things, you’d probably be pretty impressed. Of course, missing was any comparison with what you could expect from a classical computer. The questions were pretty pathetic… so much so that Geordie at one point expressed surprise at how easy people were being on him. Wim Van Dam was on the panel, and I was hoping he’d say or ask something, but alas, no comments from him were forthcoming.
Geordie Rose posted the slides from the MIT talk on his blog.
Michael: What I actually said in those blog posts (reading comprehension…it’s a wonderful thing) was that I apologized for cutting the ones that are not there now (which were even more inane than the ones I kept). The ones that are there now have been there from day one.
For large-scale AQC the gap will always be smaller than T (and the broadening induced by low-frequency noise), and while it’s not entirely clear what happens in this regime, it’s probably the case that as long as the number of energy levels within T of the ground state scales at worst polynomially with the system size, you’re OK.
For the systems we’ve built, the gaps are much larger than the temperature, so this subtlety is irrelevant. The main point is that the classical limit in a system like this is large T not long t, and that T_2 timescales are not relevant for operation of AQCs.
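Spelled out, the conjectured sufficient condition in the comment above would read something like this (my notation): with $E_k(s)$ the instantaneous spectrum along the annealing parameter $s \in [0,1]$ and $n$ the number of qubits,

```latex
N(T, s) \;=\; \#\bigl\{\, k \;:\; E_k(s) - E_0(s) \le k_B T \,\bigr\}
\;\le\; \mathrm{poly}(n)
\quad \text{for all } s \in [0,1].
```

That is, the requirement would be on the count of thermally accessible levels rather than on phase coherence over the whole run.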
Dave & Co.,
For some fun, see http://superconducting.blogspot.com/
This thread contains a post which Geordie cut from his blog. At the time he acknowledged and apologized for doing so (see the thread), and now claims to have never done so.
Anyway, what caught my attention was that in the old thread Geordie relies on the notion that their computation takes place in the regime where T>>Delta, and he even scoffs at the notion that Delta would ever be larger than T. In Mohammad’s talk, however, he calls significant attention to the idea that their system is “quantum” because Delta>>T.
Can anyone make sense of this apparent discrepancy?
“…the number of energy levels within T of the ground state scales at worst polynomially with the system size you’re OK.”
Which is certainly not generic behavior for NP-complete problems in the regimes where making progress would matter. (I think the replica symmetry breaking arguments pretty much show this definitively.)
Of course you could counter that even if there is an exponential number of states down there, decoherence/heating/quantum tunneling could just as likely help you as hurt you. But this so badly violates the no-free-lunch conjectures that believing it will pretty much put you in never-never land.
Hi Dave,
Yeah, this is an issue where further work is needed. One thing it might be useful to think about is how the density of states near the ground state throughout the evolution is affected by the choice of annealing schedule. This is related to the business of keeping the gap polynomial in the problem size; but keeping the number of energy levels participating in the evolution polynomial is easier than keeping the gap polynomial. Regarding what can be expected for the DOS of Hamiltonians of the form D_i X_i + h_i Z_i + J_ij Z_i Z_j, some of the literature I’ve seen claims that the DOS near the ground state should be polynomial for some useful instance classes. Do you have a reference for the replica symmetry breaking stuff?
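For small instances, the quantity under discussion (the number of levels within T of the ground state, for a D_i X_i + h_i Z_i + J_ij Z_i Z_j type Hamiltonian) can be checked directly by exact diagonalization. Here is a minimal sketch; the chain geometry, random parameters, and seed are my own illustrative choices, not anything D-Wave has built:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(single, site, n):
    """Tensor product placing `single` at position `site` in an n-qubit register."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

def hamiltonian(n, D, h, J):
    """H = sum_i D_i X_i + h_i Z_i + sum_i J_i Z_i Z_{i+1}.

    (Nearest-neighbour couplings on a chain, as a toy stand-in for general J_ij.)
    """
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H += D[i] * site_op(X, i, n) + h[i] * site_op(Z, i, n)
    for i in range(n - 1):
        H += J[i] * site_op(Z, i, n) @ site_op(Z, i + 1, n)
    return H

def levels_within_T(H, T):
    """Count eigenstates within energy T of the ground state (ground state included)."""
    evals = np.linalg.eigvalsh(H)
    return int(np.sum(evals - evals[0] <= T))

rng = np.random.default_rng(0)
n = 8
H = hamiltonian(n, D=0.5 * np.ones(n), h=rng.normal(size=n), J=rng.normal(size=n - 1))
print(levels_within_T(H, T=0.5))
```

Repeating this over random instances of growing n would give a crude empirical look at how the low-energy DOS scales for a given instance class, which is exactly where the polynomial-vs-exponential question bites.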
Geordie,
I do find it interesting that in the face of criticism of your TECHNOLOGY you often resort to silly personal attacks. The little quip about reading comprehension is unnecessary and juvenile. Indeed, you only cut some of the posts which were unflattering. I apologize to everyone if my post made it seem like you cut ALL related messages.
However, here’s the sequence of events as I see it. You are criticized by someone (me) for excising posts from your blog. You respond, “Michael: If you think I remove unflattering comments from my blog you obviously haven’t been reading it!” I didn’t say that you cut ALL unflattering posts, but I did suggest you do cut them. When evidence that you cut posts is presented, you respond that you only cut the ones “which were even more inane than the ones I kept.”
This follows a similar style of argument to the one you often use regarding D-Wave’s accomplishments.
But also please explain exactly what, technically, is “inane” about those posts. If they are not substantial criticisms from a technical perspective, just explain why, and convince the technical reader that you are correct.
“For large-scale AQC the gap will always be smaller than T (and broadening induced by low frequency noise), and while it’s not entirely clear what happens in this regime it’s probably the case that as long as the number of energy levels within T of the ground state scales at worst polynomially with the system size you’re OK”
Geordie: a substantial criticism of your claims to have built the prototype of a useful AQC rests on the words “while it’s not entirely clear what happens in this regime.” If D-Wave *thinks* it might be able to build a useful QC, so long as a series of theoretical assumptions are validated, that’s absolutely fine. But your company’s public statements should make this subtlety clear, rather than publicly *suggesting* that you’ve built the first quantum computer and that in the future you’re just going to build bigger ones capable of solving larger problems.