Adiabatically Failing for Random Instances

An interesting paper on the arXiv today: arXiv:0908.2782, “Adiabatic quantum optimization fails for random instances of NP-complete problems” by Boris Altshuler, Hari Krovi, and Jeremie Roland. Trouble for D-wave?

Adiabatic quantum algorithms are a set of techniques for designing quantum algorithms based on the adiabatic theorem of quantum physics. The basic idea is as follows: (1) take the problem you are trying to solve and convert it to a Hamiltonian whose ground state (you could use other energy eigenstates, but in general this is not done) encodes the solution to this problem, (2) prepare the system in the ground state of some easily constructed Hamiltonian acting on the same number of qubits (or qudits) as the Hamiltonian from (1), and finally (3) turn off the Hamiltonian from (2) while turning on the Hamiltonian from (1). The adiabatic theorem then tells you that if you perform this dragging of one Hamiltonian into the other slowly enough, you will end up in the ground state of the Hamiltonian from (1), and hence you can solve your problem by reading out the answer after this evolution. How slowly you have to go depends on a few things, but most importantly on the minimum eigenvalue gap between the ground state and the first excited state of the system during the evolution. If this minimum gap scales as an inverse polynomial in the problem size, then the adiabatic theorem guarantees that the algorithm succeeds with high probability in a time polynomial in the size of the problem. In short: minimum eigenvalue gap = inverse polynomial is good, minimum eigenvalue gap = inverse exponential is bad.
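To make the role of the minimum gap concrete, here is a toy numerical sketch (my own illustration, not anything from the paper: the problem Hamiltonian below is just a random diagonal cost on four qubits) that interpolates H(s) = (1-s) H_B + s H_P and tracks the gap between the two lowest energy levels along the path.

```python
# Toy illustration: minimum spectral gap along the adiabatic path
# H(s) = (1 - s) * H_B + s * H_P for a handful of qubits.
import numpy as np

n = 4  # number of qubits, kept tiny so dense diagonalization is cheap

X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Initial Hamiltonian H_B = -sum_i X_i; its ground state is the uniform superposition.
H_B = -sum(kron_all([X if j == i else I for j in range(n)]) for i in range(n))

# Final "problem" Hamiltonian: a diagonal classical cost, here a random permutation
# of 0..2^n-1 standing in for some optimization problem's cost function.
rng = np.random.default_rng(0)
H_P = np.diag(rng.permutation(2**n).astype(float))

# Sweep the interpolation parameter and record the gap between the two lowest levels.
min_gap, min_s = np.inf, None
for s in np.linspace(0.0, 1.0, 201):
    evals = np.linalg.eigvalsh((1.0 - s) * H_B + s * H_P)
    if evals[1] - evals[0] < min_gap:
        min_gap, min_s = evals[1] - evals[0], s

print(f"minimum gap {min_gap:.4f} at s = {min_s:.3f}")
# The adiabatic theorem says the runtime scales roughly like 1 / min_gap^2, which is
# why an inverse-exponential gap translates into an exponentially long computation.
```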
Adiabatic optimization algorithms are a particular class of adiabatic algorithms in which the final Hamiltonian encodes some “classical” optimization problem (as opposed to more general adiabatic quantum algorithms, which include things like universal quantum computation, so that there exists an adiabatic quantum algorithm for, say, factoring a number). There is a long history of studying these algorithms, and at times there has been optimism that adiabatic quantum optimization might be able to efficiently solve NP-complete problems. This would, of course, be an amazing computational breakthrough, so there has also been a lot of skepticism about this possibility. Beyond mere skepticism there has also been work showing that for particular instances, and for particular ways of carrying out the adiabatic evolution, the algorithm will take an exponentially long time to finish. One example of such work is that of van Dam, Mosca, and Vazirani (arXiv:quant-ph/0206033), who give explicit examples where a particular scheme for adiabatic quantum optimization requires exponential time. Updated 9/11/09 text follows: The example used in this paper uses a cost function (target Hamiltonian) which is very non-local and tailored to defeat the algorithm. A more local version of this result, i.e. one using 3-SAT instances, was produced by van Dam and Vazirani, but never published. However this negative result was followed by a positive result: Farhi, Goldstone and Gutmann showed in quant-ph/0208135 that if one selects a randomly varying path then the local example of van Dam and Vazirani does not pose a problem.
Given these negative and positive results, the picture is not clear. For example, the algorithm needed to be hacked to get into a form where the special cases that make it fail were solved efficiently. This indicates to me, at least, a dangerous slope: anyone attempting to prove that a particular instance is hard will end up in a situation where the authors just modify the algorithm to defeat this instance. But of course, it could be that the randomly varying path approach of Farhi, Goldstone, and Gutmann is the end of the story. Of course, whatever the answer is for the gap on particular instances, one might also argue that the specially constructed instances are not the ones that one encounters in the real world, and thus these cases are not important from a practical perspective. End updated text 9/11/09. What the paper above (arXiv:0908.2782) does is show that, in some sense, this is not true. Now because we do not have a model of what a “real world” instance of an NP-complete problem is from a mathematical perspective, what the authors substitute is a “random instance” of the problem. Whether this is a good proxy is a subject for great debate, but that notwithstanding, the authors plow on ahead. And what they show is that for random instances of an NP-complete problem called exact cover 3, the adiabatic optimization algorithm will require an exponentially long time to solve the instances.
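For concreteness, here is what a random exact cover 3 instance looks like (my own toy sketch, not code from the paper, and the clause density is an arbitrary choice): every clause names three variables and is satisfied exactly when one of the three is set to 1, and a common way to turn this into a problem Hamiltonian is a diagonal penalty per clause that vanishes only on satisfying assignments.

```python
# Illustration only: a random Exact Cover 3 instance and its clause-penalty cost.
import numpy as np

def random_ec3_instance(n_vars, n_clauses, seed=0):
    """Each clause is a triple of distinct variable indices, satisfied when
    exactly one of the three variables is assigned 1."""
    rng = np.random.default_rng(seed)
    return [tuple(rng.choice(n_vars, size=3, replace=False)) for _ in range(n_clauses)]

def cost(bits, clauses):
    """Quadratic penalty per clause: (x_i + x_j + x_k - 1)^2 is zero exactly when
    the clause is satisfied, so zero total cost means a satisfying assignment."""
    return sum((bits[i] + bits[j] + bits[k] - 1) ** 2 for (i, j, k) in clauses)

n = 16
clauses = random_ec3_instance(n_vars=n, n_clauses=10)

def to_bits(x):
    return [(x >> b) & 1 for b in range(n)]

# Brute force over all 2^n assignments (only feasible for tiny n, of course).
best = min(range(2**n), key=lambda x: cost(to_bits(x), clauses))
print("clauses:", clauses)
print("best assignment:", to_bits(best), "with cost", cost(to_bits(best), clauses))
```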
What does this all mean? Well it is another nail in the coffin for those who believe that adiabatic quantum optimization algorithms can efficiently solve NP-complete problems. Is it the end of the story? Certainly not. Why? Well there are all sorts of interesting questions: the eigenvalue gap estimated by the authors scales as O(n^{-1/8}). I don’t know enough about the best classical algorithms for exact cover 3, but it seems to me that if this is the real scaling then this would imply a polynomially sped up quantum algorithm for this problem. Also it’s still not at all clear how the initial Hamiltonian, and the subsequent path it takes to get to the final Hamiltonian, influence adiabatic quantum algorithms. Another interesting point is that the eigenvalue gap the authors can prove is exponentially shrinking only starts to show up, by their estimates, when n is above 86,000 (maybe lower, but their argument is heuristic for the lower value). Maybe it is true that adiabatic quantum algorithms are better than classical algorithms up to a certain instance size (a failure of big-O notation, so to speak).
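On that last point, here is a throwaway sketch with entirely made-up runtime curves (the constants and exponents below are hypothetical, chosen only to illustrate the crossover, not taken from any analysis of real hardware or solvers): an algorithm with the asymptotically worse scaling can still be the faster one for every instance size below some finite threshold.

```python
# Hypothetical runtime curves, purely to illustrate that asymptotic (big-O)
# scaling only takes over above some crossover size.

def t_adiabatic(n):
    # made-up: small prefactor, steeper exponent (asymptotically worse)
    return 1e-6 * 2 ** (n / 10)

def t_classical(n):
    # made-up: large prefactor, gentler exponent (asymptotically better)
    return 1.0 * 2 ** (n / 25)

n = 1
while t_adiabatic(n) < t_classical(n):
    n += 1
print("with these made-up curves, the asymptotically better solver only wins above n =", n)
```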
What does all this mean for D-wave, who is trying to build an adiabatic quantum optimizer? Well, it’s uncertain. They certainly won’t be happy to have another negative theoretical result to bash their heads up against. On the other hand, the real question, of course, is whether, when you put the real numbers in, their adiabatic machine outperforms the best classical algorithms. As a betting man, however, I think some priors just got shifted in a negative direction.
Update 9/11/09: I have added text above to clarify the story a bit after conversations with Ed Farhi. I would also recommend that anyone interested in this topic check out the comments below, which illuminate the situation even more.

10 Replies to “Adiabatically Failing for Random Instances”

  1. Their claim that their results mean AQO fails is a peculiar way to interpret these results. If I understand it correctly the effect they study happens near the end of the evolution (very near t=t_f). At this point the global optimum and the states it is crossing with are exponentially close. This means that even if you leave the global optimum, the state you end up with is exponentially close in energy to the global minimum, isn’t this correct? (!)
    We can’t yet build 86,000 qubit chips with 3.7 billion couplers (i.e. fully connected). If this is in fact physically possible, and their results are correct, this would mean we could run them using their approach and get results exponentially close in energy to the ground state with something approaching certainty. This would be the most successful optimization algorithm ever devised, by a long shot, and probably violate some complexity theory approximation theorem.
    If this result is correct (which of course it might not be) it’s a significant positive development for AQO.

  2. Hi Geordie, sorry for the slow response.
    The result only seems to me to be relevant to the question of solving NP decision problems, where I think it is negative. It’s definitely true that the way in which they show that it fails for the decision problem doesn’t have anything to say about approximation algorithms, but this doesn’t imply that it says something positive about them either! i.e. just because there is a gap at the end of the adiabatic computation to states which you’d be happy to find, this doesn’t mean that there aren’t other obstacles (small gaps) during the computation which muck this up. But definitely you are correct that the obstacle for the decision version does not imply that the adiabatic algorithm doesn’t work as a good approximation algorithm.
    (I think if exact cover had a log n approximation algorithm then NP would have polynomial time algorithms.)

  3. Also, even if the gap becomes exponentially small, in some cases this can still mean that you outperform classical algorithms. I don’t know the literature for exact cover, but I know that on random SAT problems there are different phases, and in some phases the belief propagation solvers do very well but in other phases they do poorly even on random problems. And simulated annealing/parallel tempering/whatever can take exponentially long all over the place. So, it can be a question of one exponential vs another.
    Seems to me that this is an interesting new physics approach to studying these problems, but still a lot remains to be understood.
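
As a concrete toy version of the kind of classical heuristic being discussed here, the sketch below runs plain simulated annealing on a random exact cover 3 instance. Everything in it (instance size, clause density, cooling schedule, step count) is an arbitrary choice for illustration, not taken from the paper or from any benchmark.

```python
# Toy simulated annealing on a random Exact Cover 3 instance (illustrative only).
import math
import random

random.seed(1)
n_vars, n_clauses = 40, 30
clauses = [tuple(random.sample(range(n_vars), 3)) for _ in range(n_clauses)]

def cost(x):
    # Number of violated clauses; a clause wants exactly one of its three variables set to 1.
    return sum(1 for (i, j, k) in clauses if x[i] + x[j] + x[k] != 1)

x = [random.randint(0, 1) for _ in range(n_vars)]
c = cost(x)
best_c = c
steps = 20000
for step in range(steps):
    T = max(0.01, 2.0 * (1 - step / steps))   # simple linear cooling schedule
    i = random.randrange(n_vars)              # propose a single bit flip
    x[i] ^= 1
    c_new = cost(x)
    if c_new <= c or random.random() < math.exp(-(c_new - c) / T):
        c = c_new                             # accept the move
        best_c = min(best_c, c)
    else:
        x[i] ^= 1                             # reject: undo the flip

print("fewest violated clauses found:", best_c)
```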

  4. Thanks for these comments, there are a few things we would like to clarify and expand on. In our paper, we address the question whether AQO can solve random instances of Exact-Cover, so as usual we focused on the minimum gap between the ground state and the first excited state. It is perhaps interesting to note that the gap is not only exponentially small, but actually super-exponentially small. This means that the time required by the AQO algorithm would be even worse than classical SAT solvers.
    Regarding the hope that our result might say something positive about approximation algorithms, again we hate to rain on the parade. But as Dave already pointed out, our result does not imply the existence of approximation algorithms. Quite to the contrary: if the ground state is subject to other small gaps earlier in the algorithm, and if there are similar level crossings between excited states too, then the energy of the actual final state will be carried further away from the solution. It would probably scale as some power of the problem size N.
    Regarding the concrete estimate of the number of qubits we gave such that AQO fails: of course this is a rather loose estimate that we used for our current simulations to show the scaling of the gap analytically. We expect that this number can be reduced significantly and would not be surprised if it can be reduced to a few hundred qubits. Actually, numerical simulations already indicate that this is the case. This should be compared with the fact that Exact-Cover (along with many other NPC optimization problems) for hundreds of variables does not present any problem for the many classical solvers that are available. Those really start to run into difficulties once the number of variables gets into the thousands (this is for random instances, while some industrial instances with over a million variables may also be solved due to additional structure).

  5. Dear Boris,
    I enjoyed reading your paper a lot and I think it is illuminating and very well written. What you discuss is actually a special case of the first order quantum phase transition (i.e., crossing of two localized states) that Vicky Choi and I discussed in arXiv:0904.1387. In a more general case, 2nd order perturbation is enough to create such anticrossings, while for your EC3 problem a 4th order correction is needed, as you also mentioned. While I agree with your calculations and conclusion, I question the generality of your result. Your final conclusion heavily depends on the fact that B_i and E_{ij} are independent of N. This is indeed a special property of EC3 or 3SAT for fixed M/N ratio, which makes the average number of edges per node in the corresponding graph bounded as N grows. However, I don’t think this is a generic feature of all NP-complete problems (although I am not a computer scientist and therefore not qualified to make such a statement with certainty – maybe Dave should comment on this). One can envision problems in which the number of edges per node also grows with N (e.g., random MIS in which the total number of edges grows faster than N). In that case, B_i and E_{ij} will increase with N and to some extent they may cancel the effect of the N that comes out of the sum in, e.g., your Eq. (45) or the corresponding second order perturbation version of it. For those problems \lambda may approach zero much more slowly with N, or may not approach zero at all, eliminating the danger of trapping in such local minima.

  6. I agree that the paper’s a neat idea, but it looks like Mohammad’s right. If you take a fully-connected Ising spin-glass with normally-distributed or uniform[-a,a]-distributed h’s and J’s, arguably a *much* more difficult NP-hard problem, doing the 2nd-order perturbation seems to yield that lambda_c is proportional to sqrt(Delta E). To avoid arguments that Delta E decreases with N, suppose that the coefficients are integers (still very much an NP-hard problem), so Delta E remains at least 1.
    There are a few other highly-non-obvious, even counter-intuitive, things that the paper missed, but they take a bit of explanation and raise some significant questions that should be addressed, so I’ll try writing them up instead of posting a ton of huge equations here. 😀
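
A small sketch of the kind of construction described in the comment above (my own code, with arbitrary sizes and coupling ranges): a fully connected Ising instance with integer fields and couplings. Because every energy is then an integer, any two distinct energy levels differ by at least 1, which is the Delta E >= 1 point; for a handful of spins we can simply brute-force the spectrum and check.

```python
# Fully connected Ising spin glass with integer h's and J's (illustration only).
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 10
h = rng.integers(-3, 4, size=n)                       # integer local fields in [-3, 3]
J = np.triu(rng.integers(-3, 4, size=(n, n)), k=1)    # integer couplings, each pair i < j once

def energy(s):
    # s is a vector of spins +/-1; E = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i
    return int(s @ J @ s + h @ s)

# Brute force the spectrum for this small n and check the minimum level spacing.
energies = sorted({energy(np.array(s)) for s in itertools.product([-1, 1], repeat=n)})
spacings = [b - a for a, b in zip(energies, energies[1:])]
print("ground state energy:", energies[0])
print("smallest spacing between distinct levels:", min(spacings))  # at least 1 for integer coefficients
```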

  7. Mohammad, Niel: I don’t quite understand your arguments about the structure of the problem being considered (B_i and E_{ij} being independent of N)…at least they don’t make sense to my CS-molded mind. The authors have definitely shown (as was done in arXiv:0904.1387, which I completely forgot about…doh, sorry) that there is a probability distribution under which the decision version of the problem is hard. Might there be probability distributions for which it is not hard? Certainly (especially the ones that reduce to 2SAT :)). But then you have to argue why the distribution they use is the wrong one. I think what both of you are arguing is that their argument won’t work for other probability distributions. Fine, but it seems to me that this is just a slippery slope: if there is one thing the study of NP-complete problems teaches us, it is that the structure of the actual NP language you are working with, within the class of NP-complete problems (which are not unstructured), isn’t important for the hardness of these problems (for the decision version of the problem). This is the reason, I think, why most computer scientists are now very skeptical of adiabatic quantum algorithms.

  8. “What are they skeptical about? Do you mean that they are skeptical about whether adiabatic quantum algorithms can solve NP-complete problems efficiently (i.e. in polynomial time)?”
    Yes. Skeptical about whether adiabatic quantum algorithms can solve NP-complete problems efficiently.
    “Aharonov et al. (in quant-ph/0405098) have proved that AQC is equivalent to the standard quantum computation. So do you mean that they are skeptical of *quantum algorithms* in general, and not just adiabatic algorithms?”
    I’m also skeptical that quantum algorithms in any model can solve NP-complete problems efficiently.

  9. The point I was making was not about probability distributions, though I could have added that. It’s that the particular formulation of the particular problem they chose happens to have the property they describe (assuming you believe the unjustified assumption of a zero mean on page 17). They could formulate the exact same problem in a different way (e.g. use power 4 in the cost function instead of power 2, or weight each clause differently) and if there’s even one formulation that doesn’t have that property, their argument isn’t of any relevance. Moreover, even if there’s one formulation of one NP-hard problem that has no instance with this property, the argument also has no relevance. Heck, if there’s a formulation of any APX-complete problem with no anti-crossings with states within a factor (1-epsilon) of the optimum in any instance, it’s also not relevant.
    The paper didn’t look at any reformulations of the problem, they didn’t look at any other problems, and there are questionable assumptions in the analysis they did. Frankly, I’m quite skeptical of papers that make sweeping claims in their titles and then fail to look at more than a paint fleck of the big picture. To insult the reader further, they chose not to cite where the analysis technique really came from: http://arxiv.org/abs/0904.1387
    It’s just a minor side note that the performance of random distributions of NP-hard problems is a poor metric for performance on practical NP-hard problems.
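
To make the reformulation point above concrete, here is a tiny toy comparison (my own example, not from either paper): the same exact cover clauses with a squared penalty versus a fourth-power penalty per clause. The two cost functions have exactly the same zero-cost (satisfying) assignments, but they are different Hamiltonians with different excited-state spectra.

```python
# Same Exact Cover 3 clauses, two different penalty formulations (illustration only).
from itertools import product

clauses = [(0, 1, 2), (1, 3, 4), (2, 4, 5)]   # an arbitrary small clause set on 6 variables
n = 6

def cost_power2(x):
    return sum((x[i] + x[j] + x[k] - 1) ** 2 for (i, j, k) in clauses)

def cost_power4(x):
    return sum((x[i] + x[j] + x[k] - 1) ** 4 for (i, j, k) in clauses)

assignments = list(product([0, 1], repeat=n))
sat2 = {x for x in assignments if cost_power2(x) == 0}
sat4 = {x for x in assignments if cost_power4(x) == 0}
print("same satisfying assignments:", sat2 == sat4)          # True: identical ground states
print("power-2 spectrum:", sorted({cost_power2(x) for x in assignments}))
print("power-4 spectrum:", sorted({cost_power4(x) for x in assignments}))  # different excited levels
```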

  10. Dave,
    Would you please clarify your last statement:
    “This is the reason, I think, why most computer scientists are now very skeptical of adiabatic quantum algorithms.”
    What are they skeptical about? Do you mean that they are skeptical about whether adiabatic quantum algorithms can solve NP-complete problems efficiently (i.e. in polynomial time)?
    Aharonov et al. (in quant-ph/0405098) have proved that AQC is equivalent to the standard quantum computation. So do you mean that they are skeptical of *quantum algorithms* in general, and not just adiabatic algorithms?
