Okay, so talking about D-wave is certainly not good for my Zen-Budha nature. Not good at all! So let’s talk about something related to D-wave but only tangentially so that I can get that Zen-Budha nature back.
One issue that comes up a lot in discussions about D-wave is simulated annealing. Wikipedia does a nice job defining simulated annealing. The basic idea is to mimic cooling in a physical system. Suppose you have a space of different configurations and some sort of energy function on this space. Your goal is to find the lowest energy configuration, because then you will be the coolest physicist on the block…and of course at the same time you might be solving some very hard computational problem. At a given step in the simulated annealing algorithm, you are in a given configuration and then you take a step to a configuration in its neighborhood. The probability of your particular change depends on a parameter T which is roughly the temperature. The simplest update rule is something like Metropolis Monte Carlo: if the energy is decreased, then always accept the change, but if the energy is increased by an amount [tex]$\Delta E$[/tex], accept the change with probability [tex]$e^{-\Delta E/T}$[/tex]. Of course more complex update rules are possible, but you get the idea. Then as a function of time you decrease the temperature and roughly freeze out the system. Hopefully when the temperature has reached zero you will either be at the global minimum energy or will at least have seen the globally optimal configuration along the way. Okay, so a cool strategy for finding a global optimum. (Okay, okay, that will be my last temperature joke!)
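If it helps to see the skeleton spelled out, here is a minimal Python sketch of the scheme described above. The neighbor, energy, and cooling-schedule functions are placeholders you would supply for whatever problem you are trying to cool your way through (sorry, sorry).

```python
import math
import random

def simulated_annealing(initial_state, neighbor, energy, schedule, n_steps):
    """Generic simulated annealing with a Metropolis-style acceptance rule.

    neighbor(state) proposes a nearby configuration, energy(state) is the
    cost to be minimized, and schedule(step) returns the temperature T.
    """
    state, current = initial_state, energy(initial_state)
    best_state, best_energy = state, current
    for step in range(n_steps):
        T = schedule(step)
        candidate = neighbor(state)
        cand_energy = energy(candidate)
        delta = cand_energy - current
        # Downhill moves are always accepted; uphill moves are accepted
        # with probability exp(-delta / T).
        if delta <= 0 or (T > 0 and random.random() < math.exp(-delta / T)):
            state, current = candidate, cand_energy
            if current < best_energy:
                best_state, best_energy = state, current
    return best_state, best_energy
```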
But if there is one thing you learn when you begin to learn about classical algorithms, it is that oftentimes what is good for solving one problem can be bad for solving other problems. Take, for example, the maximum matching problem. A matching of a graph is a set of edges which do not share common vertices. The maximum matching problem is to find a matching with the largest number of edges. There is a classical algorithm for finding the maximum matching which runs in a time proportional to the number of edges times the square root of the number of vertices (due to Micali and Vazirani). In other words the running time of this algorithm is polynomial.
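To see the polynomial-time route in action, here is a small example. Caveat: this uses networkx's blossom-based matching routine rather than the Micali-Vazirani algorithm itself, but it too finds a maximum matching in polynomial time.

```python
import networkx as nx

# A small example: a path on five vertices has a maximum matching of size 2.
G = nx.path_graph(5)

# With maxcardinality=True and all edges carrying the default weight of 1,
# this returns a maximum-cardinality matching as a set of disjoint edges.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)       # e.g. two disjoint edges such as {(0, 1), (2, 3)}
print(len(matching))  # 2
```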
Now what happens if we run simulated annealing on this problem? Well of course simulated annealing is a meta-algorithm…there is a lot I haven’t specified. But just take a fairly generic and straightforward way of implementing simulated annealing for this problem. Then it was shown by Sasaki and Hajek that there exist bad instances for this approach. In other words simulated annealing fails to find the maximum matching in a time polynomial in the size of the graph. Doh! On the other hand, simulated annealing does find a “close” maximum matching for suitably small temperatures. This latter fact is interesting, but what is more interesting to me is the fact that the simulated annealing algorithm fails where a classical polynomial time algorithm is known to exist.
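To make “fairly generic and straightforward” concrete, here is one such instantiation (my own sketch, not the specific chain Sasaki and Hajek analyze): the state is a matching, a move toggles a random edge, and the energy is just minus the number of edges in the matching.

```python
import math
import random
import networkx as nx

def anneal_matching(G, n_steps=20000, T0=1.0):
    """Simulated annealing for maximum matching (a generic sketch).
    State = a matching, move = toggle a random edge,
    energy = -(number of edges in the matching)."""
    edges = list(G.edges())
    matching = set()   # edges in the current matching
    matched = set()    # vertices covered by the current matching
    best = set()
    for step in range(1, n_steps + 1):
        T = T0 / step                     # a simple cooling schedule
        u, v = e = random.choice(edges)
        if e in matching:
            # Removing an edge raises the energy by 1: accept with prob e^{-1/T}.
            if random.random() < math.exp(-1.0 / T):
                matching.discard(e)
                matched -= {u, v}
        elif u not in matched and v not in matched:
            # Adding a legal edge lowers the energy, so always accept it.
            matching.add(e)
            matched |= {u, v}
        if len(matching) > len(best):
            best = set(matching)
    return best

print(len(anneal_matching(nx.path_graph(5))))   # typically 2
```

On a tiny graph like the path above this almost always succeeds; Sasaki and Hajek's point is that there are families of graphs where this kind of chain takes exponentially long to reach the maximum.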
So why does this all matter in the big scheme of things? Because one of the claims made by those who believe in D-wave is that their quantum system will outperform classical simulated annealing. But what does it mean to outperform classical simulated annealing if classical simulated annealing itself isn’t even the best classical algorithm? There is no simple answer to this question as far as I know, but it does give me pause every time I see simulated annealing compared to some quantum version of this annealing.
Friday Rant Blogging
Over at Roseblog, Geordie gets on a high horse and tells us quantum theorists that we haven’t read enough because we haven’t read papers like this. Which, I guess, makes me not a quantum theorist! No, people like me, we never read outside of our narrow focus or try to learn as much about a field we work in as possible. We’ve never heard of this Aeppli guy either….what, even Scott Aaronson, the quantum theorist of theorists, cites his work? That’s just impossible!
Oh, and by the way, the paper cited shows a speedup of a factor of thirty with their “quantum annealing” strategy over their “classical annealing” strategy. See substantial comments below by Bill Kaminsky for why the above sentence is wrong.
And yes, I’m grumpy.
Julia!
Julia Kempe, quantum computing theorist extraordinaire, is written up in this nice article on Sciencecareers.org. Hmmm, what is Julia drinking in the second picture?
128 Bit Madness
Get your own 128-bit number here!
Speaking of which, what would happen if I designed a system which used a digital rights management system with my own 128-bit number and used this system to spread the HD-DVD 128-bit number? Then they would have had to crack your system to know that you had cracked their system… Does the legal system just blow up into an infinite loop at this point?
Speaking of which, how many bits of the number do you have to publish? What if I leave off the last five bits? What about if I create a superposition of numbers such that with a one in one hundred chance you get the right 128-bit string? Is publishing such a quantum state a violation of the DMCA?
0, 1, superposition
Doh, quantum computers are tristate logic devices?
In classical computer science, bits — or binary digits — hold data encoded as ones and zeros. In quantum computing, data is measured in qubits, or quantum bits. As such, a qubit can have three possible states — one, zero or a “superposition” of one and zero.
I mean technically it is correct, I guess (ignoring mixed states), but doesn’t this make it sound like qubits are just three-state classical systems? Or is my nitpicky-meter too high?
Shor Math
Feature article at the AMS on The Mathematics Behind Quantum Computing: “This column and next month’s will present a description of Shor’s Factorization Algorithm in terms appropriate for a general mathematical audience…”
$150 Million
Muchos dollars to fund Research Centre of Excellence on Quantum Information Science and Technology at the National University of Singapore, lah.
Professor Artur Ekert, Director, Research Centre of Excellence, said: “At the moment, you can buy quantum cryptography systems, you can use it in some simple applications but somehow you have to trust companies that sell it to you or you have to test the equipment.
“The kind of quantum cryptography we develop here is probably the most sophisticated that is not available in any other countries so we have some ideas to make it so secure that you don’t even have to trust equipment that you could buy from a vendor.”
Um, First?
This press release is a horrible bastardization of this cool Science article describing the coherent controlled coupling of flux based superconducting qubits near their optimal bias points. Does everything interesting in quantum computing have to come along with an unreasonable press release? Dude, somebody needs to become the universal vetter for these damn things.
Geordie incites Scott‘s hypometer, but not nearly to the record setting levels of Orion times.
Beware of Roaming Complexity Classes?
Ack am I missing something or is the Complexity Zoo missing most of its inhabitants? Crap I don’t want to get run over by a loose AM! Update 5/2, 2:50pm Whew. That’s more like it. All the complexity classes now seem to be back in the zoo. Most disturbing, of course, was the missing ALL beast. End Update
On a more amusing front, this page on the qwiki is depressing, no?:
Articles in category “Faculty Position”
There are 0 articles in this category.
Retrieved from “http://qwiki.caltech.edu/wiki/Category:Faculty_Position”
AQIS 2007, Kyoto, Japan
Come to Japan! See who can eat more sushi, Dave or Scott?
Asian Conference on Quantum Information Science
September 3-6, 2007
Shiran Kaikan, Kyoto University
Webpage http://qc.naist.jp/aqis07. Poster here.