Stuff to read while you wait around for finals and the Christmas holidays:
- Via alea, one of the odder invocations of NP-completeness: Rowing and the Same-Sum Problem Have Their Moments
- An update on the status of US science funding for the next budget year at Computing Research Policy Blog
- An interesting paper is out on Quantum Metropolis Sampling. The key insight (slaps head) in getting a Metropolis-like algorithm to work is not to make a full energy measurement, but to reveal only the small bit of information relevant to whether to accept or reject the move. I spent many an hour trying to figure out how to get around the energy measurement step, so I personally really like this paper. Of course, calling this an “efficient” quantum algorithm seems kind of strange to me, because I’d reserve that word for algorithms that converge in polynomial time, and when the spectral gap of the map is exponentially small this sampling will take an exponential amount of time (see the gap/mixing-time sketch after this list).
- A discussion of computational complexity and economics. See especially the comment by JK.
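Not part of the original list, but to pin down the “efficient” gripe in the Quantum Metropolis item above, here are the standard mixing-time bounds for a reversible Markov chain (or ergodic map) in terms of its spectral gap; the notation is generic and not taken from the paper:

```latex
% Standard relaxation-time bounds for a reversible chain with absolute
% spectral gap \Delta = 1 - |\lambda_2| and stationary distribution \pi:
\[
  \left(\frac{1}{\Delta} - 1\right)\log\frac{1}{2\epsilon}
  \;\le\; t_{\mathrm{mix}}(\epsilon)
  \;\le\; \frac{1}{\Delta}\,\log\frac{1}{\epsilon\,\pi_{\min}},
  \qquad \pi_{\min} = \min_x \pi(x).
\]
% An exponentially small \Delta therefore forces exponentially many steps,
% which is why "efficient" here really means "efficient in 1/\Delta".
```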
It might be possible to remove the energy measurement. If |pi> is a purification of rho, then one might be able to use Szegedy operators to produce a Markov chain with |pi> as a stationary state.
Sorry, I wasn’t too clear. I meant a classical Markov chain M with pi(x) as its stationary state, the Szegedy operator W(M), and the purification of rho given by |sqrt{pi}> = sum_x sqrt{pi(x)} |x>.
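Not from the thread, but here is a minimal numerical sketch of the construction being described: a toy Gibbs distribution pi, a classical Metropolis chain M with pi as its stationary distribution (my own choice of chain, purely for illustration), one common form of the Szegedy walk W(M), and a check that the coherent state sum_x sqrt(pi(x)) |x>|p_x> is left fixed by W(M).

```python
import numpy as np

# Toy Gibbs distribution pi(x) ~ exp(-beta * E(x)) on a handful of states
# (the energies and beta are arbitrary illustrative choices).
beta = 1.0
E = np.array([0.0, 0.5, 1.0, 2.0])
pi = np.exp(-beta * E)
pi /= pi.sum()
n = len(pi)

# Classical Metropolis chain M with pi as its stationary distribution:
# propose another state uniformly, accept with probability min(1, pi(y)/pi(x)).
M = np.zeros((n, n))
for x in range(n):
    for y in range(n):
        if y != x:
            M[x, y] = min(1.0, pi[y] / pi[x]) / (n - 1)
    M[x, x] = 1.0 - M[x].sum()

# Szegedy isometry A = sum_x |x>|p_x><x|, with |p_x> = sum_y sqrt(M[x, y]) |y>.
A = np.zeros((n * n, n))
for x in range(n):
    A[:, x] = np.kron(np.eye(n)[x], np.sqrt(M[x]))

# Swap of the two registers: |x>|y> -> |y>|x>.
S = np.zeros((n * n, n * n))
for x in range(n):
    for y in range(n):
        S[y * n + x, x * n + y] = 1.0

# One common form of the walk operator: W(M) = S (2 A A^T - I).
W = S @ (2 * A @ A.T - np.eye(n * n))

# The coherent state A|sqrt(pi)> = sum_x sqrt(pi(x)) |x>|p_x> should be a
# +1 eigenvector of W(M); the argument uses detailed balance of M.
psi = A @ np.sqrt(pi)
print(np.allclose(W @ psi, psi))   # expected: True
```

The fixed-point check relies on reversibility of M; for a non-reversible chain the coherent state is not, in general, a +1 eigenvector of W(M).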
About the efficiency of the Metropolis algorithm: the same criticism can be directed at any classical Monte Carlo algorithm; the magic of Monte Carlo, however, is that in practice the gap scales as an inverse polynomial in the system size for almost any application where you expect thermalization, i.e., in all cases where it makes sense to talk about Gibbs states. BTW, the Metropolis algorithm is the first on a list of the “10 most successful algorithms of the 20th century”.
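For readers who haven't seen it, a bare-bones sketch of the classical Metropolis rule being discussed, here for a 1D Ising ring (the model, coupling, and parameters are my own illustrative choices): a proposed spin flip is accepted with probability min(1, exp(-beta*dE)).

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta, J=1.0):
    """One sweep of single-spin-flip Metropolis on a 1D Ising ring with
    H = -J * sum_i s_i s_{i+1}; a flip is accepted with prob min(1, e^{-beta*dE})."""
    n = len(spins)
    for _ in range(n):
        i = rng.integers(n)
        # Energy change from flipping spin i (periodic boundaries).
        dE = 2.0 * J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]

# Illustrative usage: energy per bond at beta = 0.5 (exact value is about -tanh(beta)).
n, beta = 64, 0.5
spins = rng.choice([-1, 1], size=n)
for _ in range(500):                      # burn-in sweeps
    metropolis_sweep(spins, beta)
samples = []
for _ in range(2000):
    metropolis_sweep(spins, beta)
    samples.append(-np.mean(spins * np.roll(spins, 1)))
print(np.mean(samples), -np.tanh(beta))   # these should roughly agree
```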
Hi Frank!
I guess what I would say is that the algorithm is efficient “in the size of the gap”, but just calling it efficient seems a bit loose to me. But that’s what I get for being in a CS department. The classical Markov chain literature also sometimes abuses this language, especially for simulated-annealing-type algorithms. I guess it’s just a pet peeve of mine.
It’s also not clear to me that your claim holds that the algorithm will work anywhere we expect thermalization: this would require showing that all realistic physical thermalization processes converge at least as fast as your algorithm, and I don’t see this in the paper. (Though in practice I bet you’re right.) (Conversely, of course, there are Metropolis-like algorithms which thermalize even when the physical model would not: cluster algorithms, for instance, can thermalize below the critical temperature of a ferromagnet in polynomial time, while in physical systems this takes exponential time.)
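A hedged sketch of the kind of cluster update mentioned in that aside, here a Wolff update for the 2D Ising model (the model and parameters are my illustrative choices, not anything from the paper): aligned neighbours join the cluster with probability 1 - exp(-2*beta*J) and the whole cluster is flipped, which is what lets these chains keep mixing quickly even below the critical temperature.

```python
import numpy as np

rng = np.random.default_rng(1)

def wolff_update(spins, beta, J=1.0):
    """One Wolff cluster update for the 2D Ising model (periodic boundaries).
    Aligned neighbours are added to the cluster with p = 1 - exp(-2*beta*J),
    and the whole cluster is flipped at the end."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta * J)
    seed = (rng.integers(L), rng.integers(L))
    cluster_spin = spins[seed]
    in_cluster = np.zeros_like(spins, dtype=bool)
    in_cluster[seed] = True
    stack = [seed]
    while stack:
        x, y = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L
            if (not in_cluster[nx, ny]
                    and spins[nx, ny] == cluster_spin
                    and rng.random() < p_add):
                in_cluster[nx, ny] = True
                stack.append((nx, ny))
    spins[in_cluster] *= -1

# Illustrative usage: magnetization below T_c (beta_c ~ 0.4407 for the 2D model).
L, beta = 16, 0.6
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(500):
    wolff_update(spins, beta)
print("|m| ~", abs(spins.mean()))
```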
The Temme et al. paper says:
“When the simulated system is classical, quantum computing can quadratically speed-up the preparation of a Gibbs state [22, 23]; …”
The problem with the Temme et al. algorithm is that it is already obsolete. It’s wrong to say that the sampling algorithms that use Szegedy operators only work “when the simulated system is classical”, whatever that means! The Szegedy-type algorithms output a PURE state |sqrt{pi}> = sum_x sqrt{pi(x)} |x>, where pi(x) is the stationary state of a Markov chain. If |sqrt{pi}> is the purification of the Gibbs state rho = exp(-beta H)/Z, then the Szegedy-type algorithms can do the same types of averages as the Temme et al. algorithm, except that the Szegedy-type algorithms do it in O(1/sqrt{delta}), whereas the Temme et al. algorithm and the classical ones do it in O(1/delta) (where delta is the difference between the absolute values of the two largest eigenvalues of the underlying classical Markov chain).
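For context on the O(1/sqrt{delta}) versus O(1/delta) claim, here is the usual quadratic relation between the spectral gap of a reversible chain and the phase gap of its Szegedy walk; a generic sketch, not a statement about either paper's specific construction:

```latex
% For a reversible chain M with absolute spectral gap \delta, the
% eigenphases \theta of the Szegedy walk W(M) satisfy \cos\theta = \lambda,
% so the nonzero phases obey \theta \ge \arccos(1-\delta) = \Omega(\sqrt{\delta}).
% Phase estimation / amplitude amplification on W(M) can therefore reflect
% about (and help prepare) |\sqrt{\pi}\rangle = \sum_x \sqrt{\pi(x)}\,|x\rangle
% in O(1/\sqrt{\delta}) walk steps, versus the O(1/\delta) classical mixing time:
\[
  t_{\mathrm{classical}} = O\!\left(\frac{1}{\delta}\right),
  \qquad
  t_{\mathrm{Szegedy}} = O\!\left(\frac{1}{\sqrt{\delta}}\right),
  \qquad
  \delta = 1 - |\lambda_2(M)|.
\]
```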
Totally OT but I thought you might like this: http://baconbaconbaconbaconbacon.com/ .
anonymous,
You would need to design a Markov chain with the purification of the Gibbs state as its stationary state. Furthermore, you would have to give a method for efficiently calculating the transition matrix of that Markov chain, and the transition matrix will depend on the eigenvalues and eigenvectors of H.
So, yes, some details still remain to be worked out 🙂
To rrtucci: before you can start thinking about Szegedy walks, you first have to identify a stochastic process that has the Gibbs state as its fixed point, and that last part is precisely the open problem that has been solved in the paper of Temme et al.