Matt Hastings wins a Simons Investigator 2012 award

The Simons Foundation has just announced the recipients of the Simons Investigator awards for 2012. These awards are similar in spirit to the MacArthur awards: the recipients did not know they were under consideration for the grant, and you can’t apply for it. Rather, you must be nominated by a panel. Each award winner will receive $100,000 annually for 5 years (possibly renewable for an additional 5 years), and their departments and institutions get annual contributions of $10,000 and $22,000, respectively.
This year, they made awards to a collection of 21 mathematicians, theoretical physicists, and theoretical computer scientists. There are a lot of good names on this list, but the one that overlaps most with the quantum information community is undoubtedly Matt Hastings. The citation for his award specifically mentions his important contributions to quantum theory such as the 1D area law and the stability result for topological order (joint with Bravyi and Michalakis). However, it doesn’t mention anything about superadditivity of quantum channels!
Here is the citation for those of you too lazy to click through:

Matthew Hastings’ work combines physical insight and mathematical power to make profound contributions to a range of topics in physics and related fields. His Ph.D. thesis produced breakthrough insights into the multifractal nature of diffusion-limited aggregation, a problem that had stymied statistical physicists for more than a decade. Hastings’ recent work has focused on proving rigorous results on fundamental questions of quantum theory, including the stability of topological quantum order under local perturbations. His results on area laws and quantum entanglement and his proof of a remarkable extension of the Lieb-Schultz-Mattis theorem to dimensions greater than one have provided foundational mathematical insights into topological quantum computing and quantum mechanics more generally.

Congratulations to Matt and the rest of the 2012 recipients.

Rounding Error in the Defense Budget

I recently (and somewhat belatedly) came across the following news item:

NASA gets two military spy telescopes for astronomy

The gist of the article is that the National Reconnaissance Office (NRO) just donated to NASA two telescopes with greater optical capabilities than the Hubble Space Telescope. For free.
Ironically, NASA may not have the budget to actually put the telescopes into space and run them. This is sort of like if someone sees that you’re parched with thirst, and they decide to give you a bottle of wine that they aren’t interested in drinking anymore, because presumably they have much better wine now. But you’re too poor to afford a bottle opener.
The Hubble cost a lot of money to build. The low-end estimate is $2.5 billion, but that is probably an underestimate by a factor of 2. That’s a lot of money, but it will barely buy you a week in Iraq, if you’re the US military.
Let’s assume that the cost to build each of those telescopes was approximately the same as the Hubble’s. Then the cost of the two NRO telescopes combined is about the same as the entire $7 billion budget of the NSF for FY2012.
Of course, US science does get money from the Department of Defense. But the “pure” science budget for the entire US is just a rounding error compared to the total DoD budget.
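To put a rough number on “rounding error”, here is a quick back-of-the-envelope check; the ~$650 billion total for the FY2012 DoD budget is my own ballpark assumption, while the other figures are the ones quoted above:

    # Back-of-the-envelope comparison; the DoD total is a rough assumption.
    hubble_low, hubble_high = 2.5e9, 5.0e9  # Hubble cost estimates quoted above
    nsf_fy2012 = 7e9                        # NSF FY2012 budget (quoted above)
    dod_fy2012 = 650e9                      # total DoD FY2012 budget (assumed)

    print(f"Two Hubble-class telescopes: ${2 * hubble_low / 1e9:.0f}-{2 * hubble_high / 1e9:.0f} billion")
    print(f"NSF budget as a fraction of DoD budget: {nsf_fy2012 / dod_fy2012:.1%}")  # about 1%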

QIP 2013

QIP is in Beijing this time from Jan 21-25, 2013. And the call for papers has just been posted.

  • Submission deadline for talks: October 5, 2012
  • Notification for talk submissions: November 9, 2012
  • Submission deadline for posters: November 16, 2012
  • Notification for poster submissions: November 26, 2012

The submission guidelines continue to slowly evolve. Last year’s call asked for “2-3 pages in a reasonable font”; this year’s asks for the somewhat more precise “up to 3 pages in 11pt font with reasonable margins, not including references”. From my experience on PCs in the past (I’m not on the PC this year, though), I’ll mention that these short submissions are sometimes very good, but sometimes don’t address the key questions that the PC will want to consider, such as:

  • What is the key result?
  • How does it compare with previous work on this subject? And how does it fit into the broader context of other research?
  • What are the technical innovations necessary to achieve it, and possibly to overcome obstacles that stopped people before?
  • (If the topic area is unconventional, or the submission is about a newly invented problem.) Why is this a good area for people to study?

These are kind of basic, but it’s surprising how many good results are undermined by not explaining these points well.
Another feature of the CFP to notice is that authors are strongly encouraged to link to a version of their paper on the arXiv, or to include a long version of their paper as an attachment. The idea here is that if a result is not ready to go on the arXiv, then it’s less likely to be correct. There was debate about making it mandatory to post a full version on the arXiv, but this idea was dropped because many people felt we might then miss out on the freshest results.
A related concern is keeping the barriers to submitting to QIP low. Of course, travel is always a barrier, but we shouldn’t waste a lot of people’s time with elaborate submission requirements if we expect the acceptance rate to be around 20%. I think the current requirements are reasonable, as long as people don’t stress too much about crafting a special short version, e.g. by writing something informal about why the paper should be accepted and referring liberally to the long version instead of trying to pack as much as possible into three pages. But I can imagine other systems as well. What do you think about the way QIP is being run right now?

Quantum Frontiers

As a postdoc at Caltech, I would often have lunch with John Preskill. About once per week, we would play a game. During the short walk back from lunch, I would think of a question to which I didn’t know the answer. Then, with maybe 100 meters to go, I would ask John that question, and he would have to answer it with a 20-minute impromptu lecture, delivered on the spot as soon as we walked into the building.
Now, these were not easy questions. At least, not to your average person, or even your average physicist. For example, “John, why do neutrinos have a small but nonzero mass?” Perhaps any high-energy theorist worth their salt would know the answer to that question, but it simply isn’t part of the training for most physicists, especially those in quantum information science.
Every single time, John would give a clear, concise and logically well-organized answer to the question at hand. He never skimped on equations when they were called for, but he would often analyze these problems using simple symmetry arguments and dimensional analysis—undergraduate physics!  At the end of each lecture, you really felt like you understood the answer to the question that was asked, which only moments ago seemed like it might be impossible to answer.
But the point of this post is not to praise John. Instead, I’m writing it so that I can set high expectations for John’s new blog, called Quantum Frontiers. Yes, that’s right, John Preskill has a blog now, and I hope that he’ll live up to these high expectations with content of similar or higher quality to what I witnessed in those after-lunch lectures. (John, if you’re reading this, no pressure.)
And John won’t be the only one blogging. It seems that the entire Caltech IQIM intends to “bring you firsthand accounts of the groundbreaking research taking place inside the labs of IQIM, and to answer your questions about our past, present and future work on some of the most fascinating questions at the frontiers of quantum science.”
This sounds pretty exciting, and it’s definitely a welcome addition to the (underrepresented?) quantum blogosphere.

Threading the needle of mathematical consistency

The latest round of the debate between Aram and Gil Kalai is now up over at Goedel’s Lost Letter.
I realize that I’m preaching to the choir at this venue, but I thought I would highlight Aram’s response. He nicely sums up the conundrum for quantum skeptics with a biblical allusion:

…Gil and other modern skeptics need a theory of physics which is compatible with existing quantum mechanics, existing observations of noise processes, existing classical computers, and potential future reductions of noise to sub-threshold (but still constant) rates, all while ruling out large-scale quantum computers. Such a ropy camel-shaped theory would have difficulty in passing through the needle of mathematical consistency.

One of the things that puzzles me about quantum skeptics is that they always seem to claim that there is an in-principle reason why large-scale quantum computation is impossible. I just completely fail to see how they infer this from the existing lines of evidence. If the skeptics altered their argument slightly, I might have more sympathy. For example, why aren’t we powering everything with fusion by now? Is it because there is a fundamental principle that prevents us from harnessing fusion power? No! It just turned out to be much harder than anyone thought to generate a self-sustaining controlled fusion reaction. If skeptics argued that quantum computers are the new fusion reactors, then I could at least understand how they might hold that view in spite of continuing experimental progress. But in principle impossible? It seems like an awfully narrow needle to thread.

QIP 2012 videos!

The long-awaited videos from QIP 2012 are now online.
Somewhat. If you see a video that you want that isn’t online, then tell the speaker to sign the release form! I’m looking at you, Bravyi, Greiner, Landahl, and Popescu! (And others.)
If you haven’t seen them before, then you should check out the talks by Haah (on 3-d local stabilizer codes) and Arad (on an improved 1-d area law).

A jack of two trades and a master of both

IBM, in recounting its history of arranging high-profile contests between humans and computers, describes the tension surrounding the final chess game between Deep Blue and Garry Kasparov. One of the machine’s designers “vividly remembers the final game of the six-game match in 1997. Deep Blue was so dominant that the game ended in less than two hours. His wife, Gina, had planned on arriving at the venue in the Equitable Center in New York City to watch the second half, but it was over before she arrived.” To my mind, the machine’s raw chess prowess is less remarkable than its ability to multitask between chess and marriage.

300

[Image credit: Britton/NIST]

One of the more exciting prospects for near-term experimental quantum computation is to realize a large-scale quantum simulator. Getting a rigorous definition of a quantum simulator is tricky, but intuitively the concept is clear: we wish to have quantum systems in the lab with tunable interactions which can be used to simulate other quantum systems that we might not be able to control, or even create, in their “native” setting. A good analogy is a scale model that might be used to simulate the fluid flow around an airplane wing. Of course, these days you would use a digital simulation of that wing with finite element analysis; in the analogy, that would correspond to using a fault-tolerant quantum computer, a much bigger challenge to realize.
We’ve highlighted the ongoing progress in quantum simulators using optical lattices before, but now ion traps are catching up in interesting ways. They have literally leaped into the next dimension and trapped an astounding 300 ions in a 2D trap with a tunable Ising-like coupling. Previous efforts had a 1D trapping geometry and ~10 qubits; see e.g. this paper (arXiv).
J. W. Britton et al. report in Nature (arXiv version) that they can form a triangular lattice of beryllium ions in a Penning trap where the strength of the interaction between ions $latex i$ and $latex j$ can be tuned to $latex J_{i,j} \sim d(i,j)^{-a}$ for any $latex 0<a<3$, where $latex d(i,j)$ is the distance between spins $latex i$ and $latex j$, simply by changing the detuning of their lasers. (They only give results up to $latex a=1.4$ in the paper, however.) They can change the sign of the coupling, too, so that the interactions are either ferromagnetic or antiferromagnetic (the more interesting case). They also have global control of the spins via a controllable homogeneous single-qubit coupling. Unfortunately, one thing they don’t have is individual addressing with the control.
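For intuition about what that tunable power law means, here is a minimal sketch that builds the coupling matrix $latex J_{i,j} = J_0 \cdot d(i,j)^{-a}$ on a small triangular lattice; the lattice size, spacing, and overall scale $latex J_0$ are made-up illustration values, not the parameters of the experiment:

    import numpy as np

    # Toy power-law Ising couplings J_ij = J0 * d(i,j)**(-a) on a small
    # triangular lattice. All numbers here are illustrative, not the
    # Britton et al. experimental parameters.
    def triangular_lattice(nx=5, ny=5, spacing=1.0):
        """Return the 2D positions of sites on a triangular lattice."""
        pts = []
        for row in range(ny):
            for col in range(nx):
                x = (col + 0.5 * (row % 2)) * spacing
                y = row * spacing * np.sqrt(3) / 2
                pts.append((x, y))
        return np.array(pts)

    def coupling_matrix(positions, a, J0=1.0):
        """J_ij = J0 * d(i,j)**(-a) for i != j; zero on the diagonal."""
        d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
        J = np.zeros_like(d)
        off_diag = ~np.eye(len(positions), dtype=bool)
        J[off_diag] = J0 * d[off_diag] ** (-a)
        return J

    pos = triangular_lattice()
    for a in (0.1, 1.4, 3.0):  # the paper reports tunability over 0 < a < 3
        J = coupling_matrix(pos, a)
        # Small a gives nearly uniform couplings; large a decays quickly.
        print(f"a = {a}: J(nearest) / J(next-nearest along a row) = {J[0, 1] / J[0, 2]:.2f}")
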
In spite of the lack of individual control, they can still turn up the interaction strength beyond the point where a simple mean-field model agrees with their data. In the figure below, panels a) and b) show a pulse sequence on the Bloch sphere, and panels c) and d) show the probability of measuring spin-up along the z-axis. Panel c) is the weak-coupling limit where mean-field theory holds, and d) is where mean-field no longer applies.
[Image credit: Britton/NIST]

Whether or not there is an efficient way to replicate all of the observations from this experiment on a classical computer is not entirely clear. Of course, we can’t prove that they can’t be simulated classically—after all, we can’t even separate P from PSPACE! But it is not hard to imagine that we are fast approaching the stage where our quantum simulators are probing regimes that can’t be explained by current theory due to the computational intractability of performing the calculation using any existing methods. What an exciting time to be doing quantum physics!

Ghost Paper Dance!

In a belated revival of the Ghost Pontiff’s “Happy Paper Dance” ritual, I’d like to talk about the recent paper The k-local Pauli Commuting Hamiltonians Problem is in P by my student Jijiang (Johnny) Yan and his former advisor, Dave Bacon. The abstract is:

Given a Hamiltonian that is a sum of commuting few-body terms, the commuting Hamiltonian problem is to determine if there exists a quantum state that is the simultaneous eigenstate of all of these terms that minimizes each term individually. This problem is known to be in the complexity class quantum Merlin-Arthur, but is widely thought to not be complete for this class. Here we show that a limited form of this problem when the individual terms are all made up of tensor products of Pauli matrices is efficiently solvable on a classical computer and thus in the complexity class P. The problem can be thought of as the classical XOR-SAT problem over a symplectic vector space. This class of problems includes instance Hamiltonians whose ground states possess topological entanglement, thus showing that such entanglement is not always a barrier for the more general problem.

This result follows a long string of papers that discuss the complexity of finding the ground state energy of k-local Hamiltonians, usually modified by various adjectives like “commuting” or “frustration-free” or “Pauli” or “in d dimensions.” Typically, these problems are shown to be somewhere between NP and QMA, and the subtle differences between these relate to issues such as topological order and the quantum PCP conjecture. In fact, one specific inspiration for this paper was 1102.0770, which showed that 3-qubit (or even 3-qutrit) commuting Hamiltonians could not have topologically-ordered ground states, while 4-qubit commuting Hamiltonians include the toric code, and 2-qubit non-commuting Hamiltonians include things that look like the toric code.
This paper shows that, in the case of commuting Pauli Hamiltonians, the difference between 3-local and 4-local is not important from a complexity point of view; indeed, it is possible to efficiently find the ground state of even $latex O(n)$-local Hamiltonians.
At first this is shocking, but to see why it’s reasonable to expect this result, consider classical (commuting, Pauli) Hamiltonians. Determining whether such terms have a simultaneous ground state is equivalent to solving a linear system of equations over $latex \mathbb{F}_2$, which of course can be done in poly-time. This paper extends that to general Paulis, but the algorithm still involves solving linear systems of equations, this time over $latex \mathbb{F}_4$. It is one of my favorite examples of the power and simplicity of the Pauli matrices, tied perhaps with the elegant Wehner-Winter uncertainty relations for anti-commuting observables.
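To illustrate just that classical special case (every term a product of $latex Z$’s), here is a small sketch that decides whether a set of parity constraints has a simultaneous ground state by Gaussian elimination over $latex \mathbb{F}_2$. The encoding and the frustrated-triangle example are my own toy illustration; the algorithm in the paper handles general Pauli terms via the $latex \mathbb{F}_4$ (symplectic) representation.

    import numpy as np

    # Decide whether a set of XOR (parity) constraints is satisfiable, i.e.
    # whether a classical Z-type commuting Pauli Hamiltonian has a state that
    # minimizes every term. Row k of A is the support of the k-th Z-string,
    # and b[k] is its desired parity (0 for eigenvalue +1, 1 for -1).
    def xor_sat_satisfiable(A, b):
        A = np.array(A, dtype=np.uint8) % 2
        b = np.array(b, dtype=np.uint8) % 2
        m, n = A.shape
        row = 0
        for col in range(n):
            pivot = next((r for r in range(row, m) if A[r, col]), None)
            if pivot is None:
                continue
            A[[row, pivot]] = A[[pivot, row]]
            b[[row, pivot]] = b[[pivot, row]]
            for r in range(m):
                if r != row and A[r, col]:
                    A[r] ^= A[row]
                    b[r] ^= b[row]
            row += 1
        # Inconsistent iff some all-zero row demands parity 1 (i.e. 0 = 1).
        return not any(b[r] and not A[r].any() for r in range(m))

    # Toy example: Z1 Z2 = -1, Z2 Z3 = -1, Z1 Z3 = -1 is frustrated on a triangle.
    print(xor_sat_satisfiable([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [1, 1, 1]))  # False
    print(xor_sat_satisfiable([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [1, 1, 0]))  # True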

(|Democrat> + |Republican>) / sqrt(2)?

It has long been known that party politics exhibits quantum effects. (An excerpt, which I’m sure is not retaliation for Sokal’s hoax: “…we show evidence using the Smith et. al data that a tenet of a classical model that has animated work in the field appears violated in a form that gives way naturally to embrace of the superposition principle and then suggest that the classical formalisms and theories of preference separability might best be viewed as special cases of the quantum versions.”)
But what use is this theory, if we can’t apply it to any situations of practical relevance? Finally, the theory of quantum politics has found a way to explain current politics with A Quantum Theory of Mitt Romney, in an article that actually does a better job of explaining complementarity, uncertainty, etc. than do a lot of popular articles (quantum leap, not so much).
As an example of what this new theory explains, there is a Feynman diagram depicting a collision between a Romney and an anti-Romney that yields an electron and a $20 bill.