Funding boost for the arXiv

This is fantastic news: starting this January, the Simons Foundation will provide the Cornell University Library with up to US $300k per year (for the next five years) in matching funds to help ensure the continued sustainability of arXiv.org. The funds match donations from about 120 institutions in a dozen countries that are well funded and are heavy downloaders of arXiv articles. The foundation is also providing an unconditional gift of $50k per year. Here’s the press release from the CUL.
I think it is pretty remarkable how an institution like the arXiv, which every reader of this blog will agree is absolutely indispensable for research, has struggled to make ends meet. This is especially true given that the amount of money it takes to keep it going is really just a drop in the bucket compared to other spending. Look at some of the numbers: in the last year alone, there were more than 50 million downloads worldwide and more than 76,000 articles submitted. To have open access to that kind of information for a total cost of about $1m per year? Priceless.

Alexei Kitaev wins Fundamental Physics Prize

Alexei Kitaev was just named a co-recipient of the inaugural Fundamental Physics Prize, along with 8 other distinguished physicists. This brand-new prize was established by the Russian billionaire Yuri Milner. More at the New York Times. Alexei is credited in the citation as follows:

For the theoretical idea of implementing robust quantum memories and fault-tolerant quantum computation using topological quantum phases with anyons and unpaired Majorana modes.

This is without question a well-deserved award. I know I’m not the only one who has said, only half joking, that I work on “Kitaev Theory”. A hearty congratulations to Alexei.
There’s more to this story, though! Since the NYT highlights it anyway, there is no dancing around the fact that this is the largest monetary prize in the history of physics: US $3 million! And with big money comes big controversy. From the NYT, my emphasis:

Unlike the Nobel in physics, the Fundamental Physics Prize can be awarded to scientists whose ideas have not yet been verified by experiments, which often occurs decades later. Sometimes a radical new idea “really deserves recognition right away because it expands our understanding of at least what is possible,” Mr. Milner said.
Dr. Arkani-Hamed, for example, has worked on theories about the origin of the Higgs boson, the particle recently discovered at the Large Hadron Collider in Switzerland, and about how that collider could discover new dimensions. None of his theories have been proven yet. He said several were “under strain” because of the new data.

Given that this is worth more than the Nobel prize (“only” $1.2 million, usually shared by 3 people), what sort of incentives does this set up? I’m glad that Mr. Milner’s award will go to researchers with breakthrough ideas, but sometimes great ideas don’t agree with Nature! They will have other value, for sure, but is it too risky to reward “radical ideas” over correct ideas? Mr. Milner, who made his fortune investing in internet companies and knows a thing or two about risk, apparently doesn’t think so.
Update: Given that 5 of the recipients were string theorists, it is unsurprising that Peter Woit got there first to add some fuel to the fire.

This post was supported by Goldman-Sachs Grant No. GS98039

After my earlier post about the defense budget, I thought it might be nice if there were some similar-sized revenue streams, other than DoD funding, that we could tap into. It got me thinking… who has the most money? Governments aside (they already have schemes for funding science), it has to be large corporations and big investment banks.
While some large corporations have R&D divisions (e.g. the quantum group at IBM), I’m not aware of any investment bank that has one, despite the large number of physicists, mathematicians, and computer scientists that they employ. Could we possibly get a bank to directly fund scientific research? After all, what is the entire $7 billion NSF budget to a big investment bank? A JP Morgan executive loses that kind of money in the cushions of his couch.
Here is something that could possibly entice one of these entities to invest in physics: using neutrinos to do high-frequency trading. While all those other suckers are busy sending signals overland via satellites and fiber optics, you just take a shortcut with a neutrino beam straight through the center of the Earth! My back-of-the-envelope calculation suggests an 18 ms difference to send a signal through the Earth from NYC to Shanghai rather than over the surface. You could make the trade and still have time to enjoy a quick blink afterward.
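For the curious, here is roughly how such an estimate goes; treat it as a sketch, since the exact figure depends on what you assume about the overland route and signal speed.

\[
s_{\text{arc}} = R\,\theta \approx 6371\ \text{km} \times 1.86 \approx 11{,}900\ \text{km},
\qquad
s_{\text{chord}} = 2R\sin(\theta/2) \approx 10{,}200\ \text{km}.
\]

Here R is the Earth’s radius and \theta ≈ 1.86 rad is the central angle between New York and Shanghai. If the overland signal travels at c (line of sight), the neutrino shortcut saves about 5 ms; if it crawls through optical fiber at roughly 2c/3, the saving grows to about 25 ms, which brackets the 18 ms figure above.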
In fact, a group of physicists at Fermilab recently did an experiment (arXiv) demonstrating the use of a neutrino beam to (classically 🙂) communicate through the Earth. The bit rate was low, only 0.1 bits per second, and the distance was only 240 m. I’m sure one of the milestones on their Goldman-Sachs grant is to get that up to 1 bps and 1 km before the program review.

Matt Hastings wins a Simons Investigator 2012 award

The Simons Foundation has just announced the recipients of the Simons Investigator awards for 2012. These awards are similar in spirit to the MacArthur awards: the recipients did not know they were under consideration for the grant, and you can’t apply for it. Rather, you must be nominated by a panel. Each award winner will receive $100,000 annually for 5 years (possibly renewable for an additional 5 years), and their departments and institutions get annual contributions of $10,000 and $22,000, respectively.
This year, they made awards to a collection of 21 mathematicians, theoretical physicists, and theoretical computer scientists. There are a lot of good names on this list, but the one that overlaps most with the quantum information community is undoubtedly Matt Hastings. The citation for his award specifically mentions his important contributions to quantum theory, such as the 1D area law and the stability result for topological order (joint with Bravyi and Michalakis). However, it doesn’t mention anything about the superadditivity of quantum channel capacities!
Here is the citation for those of you too lazy to click through:

Matthew Hastings’ work combines physical insight and mathematical power to make profound contributions to a range of topics in physics and related fields. His Ph.D. thesis produced breakthrough insights into the multifractal nature of diffusion-limited aggregation, a problem that had stymied statistical physicists for more than a decade. Hastings’ recent work has focused on proving rigorous results on fundamental questions of quantum theory, including the stability of topological quantum order under local perturbations. His results on area laws and quantum entanglement and his proof of a remarkable extension of the Lieb-Schultz-Mattis theorem to dimensions greater than one have provided foundational mathematical insights into topological quantum computing and quantum mechanics more generally.

Congratulations to Matt and the rest of the 2012 recipients.

Rounding Error in the Defense Budget

I recently (and somewhat belatedly) came across the following news item:

NASA gets two military spy telescopes for astronomy

The gist of the article is that the National Reconnaissance Office (NRO) has just donated to NASA two telescopes with greater optical capabilities than the Hubble Space Telescope. For free.
Ironically, NASA may not have the budget to actually put the telescopes into space and run them. It’s a bit like someone seeing that you’re parched with thirst and giving you a bottle of wine they aren’t interested in drinking anymore, presumably because they have much better wine now. But you’re too poor to afford a bottle opener.
The Hubble cost a lot of money to build. The low-end estimate is US $2.5 billion, but that is probably an underestimate by a factor of 2. That’s a lot of money, but it will barely buy you a week in Iraq, if you’re the US military.
Let’s assume that the cost to build those telescopes was approximately the same as the Hubble. This means that the cost of the two NRO telescopes combined is about the same as the entire $7 billion budget of the NSF for FY2012.
Of course, US science does get money from the Department of Defense. But the “pure” science budget for the entire US is just a rounding error compared to the total DoD budget.

Quantum Frontiers

As a postdoc at Caltech, I would often have lunch with John Preskill.  About once per week, we would play a game. During the short walk back from lunch, I would think of a question to which I didn’t know the answer. Then, with maybe 100 meters to go, I would ask John that question, and he would have to answer it with an impromptu 20-minute lecture, delivered right away, as soon as we walked into the building.
Now, these were not easy questions. At least, not to your average person, or even your average physicist. For example, “John, why do neutrinos have a small but nonzero mass?” Perhaps any high-energy theorist worth their salt would know the answer to that question, but it simply isn’t part of the training for most physicists, especially those in quantum information science.
Every single time, John would give a clear, concise and logically well-organized answer to the question at hand. He never skimped on equations when they were called for, but he would often analyze these problems using simple symmetry arguments and dimensional analysis—undergraduate physics!  At the end of each lecture, you really felt like you understood the answer to the question that was asked, which only moments ago seemed like it might be impossible to answer.
But the point of this post is not to praise John. Instead, I’m writing it so that I can set high expectations for John’s new blog, called Quantum Frontiers. Yes, that’s right, John Preskill has a blog now, and I hope that he’ll live up to these high expectations with content of similar or higher quality to what I witnessed in those after-lunch lectures. (John, if you’re reading this, no pressure.)
And John won’t be the only one blogging. It seems that the entire Caltech IQIM aims to “bring you firsthand accounts of the groundbreaking research taking place inside the labs of IQIM, and to answer your questions about our past, present and future work on some of the most fascinating questions at the frontiers of quantum science.”
This sounds pretty exciting, and it’s definitely a welcome addition to the (underrepresented?) quantum blogosphere.

Threading the needle of mathematical consistency

The latest round of the debate between Aram and Gil Kalai is now up over at Gödel’s Lost Letter.
I realize that I’m preaching to the choir at this venue, but I thought I would highlight Aram’s response. He nicely sums up the conundrum for quantum skeptics with a biblical allusion:

…Gil and other modern skeptics need a theory of physics which is compatible with existing quantum mechanics, existing observations of noise processes, existing classical computers, and potential future reductions of noise to sub-threshold (but still constant) rates, all while ruling out large-scale quantum computers. Such a ropy camel-shaped theory would have difficulty in passing through the needle of mathematical consistency.

One of the things that puzzles me about quantum skeptics is that they always seem to claim that there is an in-principle reason why large-scale quantum computation is impossible. I just completely fail to see how they infer this from the existing lines of evidence. If the skeptics would alter their argument slightly, I might have more sympathy. For example, why aren’t we powering everything with fusion by now? Is it because there is a fundamental principle that prevents us from harnessing fusion power? No! It just turned out to be much harder than anyone thought to generate a self-sustaining controlled fusion reaction. If skeptics argued that quantum computers are the new fusion reactors, then I could at least understand how they might hold that view in spite of continuing experimental progress. But in principle impossible? It seems like an awfully narrow needle to thread.

300

Credit: Britton/NIST

One of the more exciting prospects for near-term experimental quantum computation is to realize a large-scale quantum simulator. Now, getting a rigorous definition of a quantum simulator is tricky, but intuitively the concept is clear: we wish to have quantum systems in the lab with tunable interactions that can be used to simulate other quantum systems that we might not be able to control, or even create, in their “native” setting. A good analogy is a scale model that might be used to simulate the fluid flow around an airplane wing. Of course, these days you would use a digital simulation of that wing with finite element analysis; in the analogy, that would correspond to using a fault-tolerant quantum computer, a much bigger challenge to realize.
We’ve highlighted the ongoing progress in quantum simulators using optical lattices before, but now ion traps are catching up in interesting ways. They have literally leaped into the next dimension and trapped an astounding 300 ions in a 2D trap with a tunable Ising-like coupling. Previous efforts had a 1D trapping geometry and ~10 qubits; see e.g. this paper (arXiv).
J. W. Britton et al. report in Nature (arXiv version) that they can form a triangular lattice of beryllium ions in a Penning trap where the strength of the interaction between ions i and j can be tuned to J_{i,j} \sim d(i,j)^{-a} for any 0 < a < 3, where d(i,j) is the distance between spins i and j, simply by changing the detuning of their lasers. (They only give results up to a = 1.4 in the paper, however.) They can also change the sign of the coupling, so that the interactions are either ferromagnetic or antiferromagnetic (the more interesting case). They also have global control of the spins via a controllable homogeneous single-qubit coupling. Unfortunately, one thing they don’t have is individual addressing of the spins.
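To make the scaling concrete, here is a minimal sketch of such a power-law coupling matrix. This is not the NIST group’s code; the lattice size and the values of J0 and a below are illustrative assumptions.

# Build power-law Ising couplings J_ij = J0 / d(i,j)^a on a triangular lattice.
import numpy as np

def triangular_lattice(nx, ny, spacing=1.0):
    """Positions of nx*ny sites on a triangular lattice."""
    pts = []
    for row in range(ny):
        for col in range(nx):
            x = (col + 0.5 * (row % 2)) * spacing
            y = row * spacing * np.sqrt(3) / 2
            pts.append((x, y))
    return np.array(pts)

def coupling_matrix(positions, J0=1.0, a=1.4):
    """J_ij = J0 / d(i,j)^a, with no self-coupling on the diagonal."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        J = J0 / d**a
    np.fill_diagonal(J, 0.0)
    return J

pos = triangular_lattice(18, 17)         # 306 sites, roughly the scale of the experiment
J = coupling_matrix(pos, J0=1.0, a=1.4)  # the exponent a can sit anywhere in (0, 3)
print(J.shape)                           # (306, 306)

Turning the knob a between 0 and 3 interpolates between nearly infinite-range and rapidly decaying interactions, which is exactly what makes this simulator so flexible.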
In spite of the lack of individual control, they can still turn up the interaction strength beyond the point where a simple mean-field model agrees with their data. In panels a) and b) of the figure below you see a pulse sequence on the Bloch sphere, and in panels c) and d) you see the probability of measuring spin-up along the z axis. Panel c) is the weak-coupling limit where mean-field theory holds, and panel d) is where mean-field theory no longer applies.
Credit: Britton/NIST

Whether or not there is an efficient way to replicate all of the observations from this experiment on a classical computer is not entirely clear. Of course, we can’t prove that they can’t be simulated classically—after all, we can’t even separate P from PSPACE! But it is not hard to imagine that we are fast approaching the stage where our quantum simulators are probing regimes that can’t be explained by current theory due to the computational intractability of performing the calculation using any existing methods. What an exciting time to be doing quantum physics!

Quantitative journalism with open data

This is the best news article I’ve seen in a while:

It’s the political cure-all for high gas prices: Drill here, drill now. But more U.S. drilling has not changed how deeply the gas pump drills into your wallet, math and history show.
A statistical analysis of 36 years of monthly, inflation-adjusted gasoline prices and U.S. domestic oil production by The Associated Press shows no statistical correlation between how much oil comes out of U.S. wells and the price at the pump.

Emphasis added. It’s a great example of quantitative journalism. They took the simple and oft-repeated claim that increased US oil production reduces domestic gas prices (known colloquially as “drill baby drill”), and they subjected it to a few simple statistical tests for correlation and causality. The result is that there is no correlation, or at least not a statistically significant one. They tested for causality using the notion of Granger causality, and they found that if anything, higher prices Granger-cause more drilling, not the other way around!
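If you want to try the same kind of test on the published data, here is a rough sketch using statsmodels; the file name and column names are hypothetical placeholders, not the AP’s actual schema.

# Sketch of a Granger-causality check on monthly price/production data.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("gas_prices_and_production.csv")  # hypothetical file name
# Work with month-over-month relative changes, as in the scatter plot below.
changes = df[["price_per_gallon", "oil_production"]].pct_change().dropna()

# Does production help predict price? (the test asks whether the second
# column Granger-causes the first)
grangercausalitytests(changes[["price_per_gallon", "oil_production"]], maxlag=12)

# And the reverse: does price Granger-cause production?
grangercausalitytests(changes[["oil_production", "price_per_gallon"]], maxlag=12)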
And here’s the very best part of this article. They published the data and the analysis so that you can check the numbers yourself or reach your own conclusion. From the data, here is a scatter plot of the relative change in price per gallon (inflation adjusted) against the relative change in production:

What’s more, they asked several independent experts (three statistics professors and a statistician at an energy consulting firm), and all of them corroborated the analysis.
Kudos to Jack Gillum and Seth Borenstein of the Associated Press for this wonderful article. I hope we can see more examples of quantitative journalism like this in the future, especially with open data.

The Nine Circles of LaTeX Hell

This guy had an overfull hbox
Poorly written LaTeX is like a rash. No, you won’t die from it, but it needlessly complicates your life and makes it difficult to focus on pertinent matters. The victims of this unfortunate blight can be either the readers, as in the case of bad typography, or you and your coauthors, as in the case of bad coding practice.
Today, in an effort to combat this particular scourge (and in keeping with the theme of this blog’s title), I will be your Virgil on a tour through the nine circles of LaTeX hell. My intention is not to shock or frighten you, dear Reader. I hope, like Dante before me, to promote a more virtuous lifestyle by displaying the wages of sin. However, unlike Dante I will refrain from pointing out all the famous people that reside in these various levels of hell. (I’m guessing Dante already had tenure when he wrote The Inferno.)
The infractions will be minor at first, becoming progressively more heinous, until we reach utter depravity at the ninth level. Let us now descend the steep and savage path.

1) Using {\it ...} and {\bf ...}, etc.

Admittedly, this point is a quibble at best, but let me tell you why you should use \textit{...} and \textbf{...} instead. First, \it and \bf don’t compose correctly (in fact, they reset all font attributes), so {\it {\bf ...}} does not produce bold italics as you might expect. Second, \it does not insert the italic correction, so the spacing after italicized text comes out wrong. Compare {\it test}text to \textit{test}text and notice how crowded the former is.
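A minimal illustration of both problems, if you want to compile it yourself:

\documentclass{article}
\begin{document}
{\it {\bf not bold italics}} \par      % \bf resets the italic attribute: upright bold
\textit{\textbf{bold italics}} \par    % the \text... commands compose correctly
{\it test}text vs.\ \textit{test}text  % only \textit inserts the italic correction
\end{document}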

2) Using \def

\def is a plain TeX command that defines a macro without first checking whether a macro of that name already exists; hence it will silently overwrite an existing definition without producing an error message. This one can be dangerous if you have coauthors: maybe you use \E to mean \mathbb{E}, while your coauthor uses it to mean \mathcal{E}. If you are writing different sections of the paper, then you might introduce some very confusing typos. Use \newcommand or \renewcommand instead.
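The safer pattern looks something like this (sticking with the \E example from above; \mathbb needs the amssymb package):

\newcommand{\E}{\mathbb{E}}      % errors with "Command \E already defined" if \E exists
\renewcommand{\E}{\mathcal{E}}   % an explicit, intentional redefinition
% \def\E{\mathcal{E}}            % would silently clobber whatever \E meant before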

3) Using $$...$$

This one is another piece of plain TeX. It produces inconsistent vertical spacing around display formulas, and it causes the fleqn option to stop working. Moreover, it is syntactically harder to parse, since you can’t detect an unmatched pair as easily. Using \[...\] avoids these issues.
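In other words, write

\[
  \int_0^1 x^2 \, \mathrm{d}x = \frac{1}{3}
\]

rather than wrapping the same formula in double dollar signs.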

4) Using the eqnarray environment

If you don’t believe me that eqnarray is bad news, then ask Lars Madsen, the author of “Avoid eqnarray!”, a 10-page treatise on the subject. It handles spacing inconsistently and will let long equations collide with their equation numbers. You should use the amsmath package with the align environment instead.
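For comparison, a minimal align example looks like this:

% in the preamble: \usepackage{amsmath}
\begin{align}
  (a+b)^2 &= a^2 + 2ab + b^2, \\
  (a-b)^2 &= a^2 - 2ab + b^2 \nonumber
\end{align}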
Now we begin reaching the inner circles of LaTeX hell, where the crimes become noticeably more sinister.

5) Using standard size parentheses around display size fractions

Consider the following abomination: (\displaystyle \frac{1+x}{2})^n (\frac{x^k}{1+x^2})^m = (\int_{-\infty}^{\infty} \mathrm{e}^{-u^2}\mathrm{d}u )^2.
Go on, stare at this for one minute and see if you don’t want to tear your eyes out. Now you know how your reader feels when you are too lazy to use \left and \right.
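For reference, here is the same equation typeset the way a merciful author would write it:

\[
  \left( \frac{1+x}{2} \right)^{n}
  \left( \frac{x^k}{1+x^2} \right)^{m}
  = \left( \int_{-\infty}^{\infty} \mathrm{e}^{-u^2}\,\mathrm{d}u \right)^{2}
\]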

6) Not using BibTeX

Manually writing your bibliography makes it more likely that you will make a mistake, and it adds a huge unnecessary workload for you and your coauthors. If you don’t already use BibTeX, then make the switch today.
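The workflow is simple: keep your references in a .bib file and let BibTeX format them. The entry below is a made-up placeholder, just to show the shape of things:

% refs.bib
@article{doe2012example,
  author  = {Doe, Jane and Roe, Richard},
  title   = {A Placeholder Title},
  journal = {Journal of Examples},
  volume  = {1},
  pages   = {1--10},
  year    = {2012}
}

% in the .tex file
This was shown in \cite{doe2012example}.
\bibliographystyle{plain}
\bibliography{refs}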

7) Using text in math mode

Writing H_{effective} is horrendous, but even H_{eff} is an affront: in math mode the subscript letters are typeset as a product of variables, with the wrong spacing and a broken ff ligature. There are lots of ways to avoid this, like using \text or \mathrm, which lead to the much more elegant and legible H_{\text{eff}}. Don’t use \mbox, though, because it doesn’t adjust the font size in subscripts: H_{\mbox{eff}}.
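Better still, define the operator once in the preamble so the subscript is consistent everywhere (the macro name here is just a suggestion; \text needs amsmath):

\newcommand{\Heff}{H_{\text{eff}}}
% then write \Heff throughout instead of retyping the subscript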

8) Using a greater-than sign for a ket

At this level of hell in Dante’s Inferno, some of the accursed are being whipped by demons for all eternity. This seems to be about the right level of punishment for people who use the obscenity |\psi>.
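The fix is a two-line macro in the preamble, or a package such as braket that provides the same thing (\lvert and \rvert require amsmath):

\newcommand{\ket}[1]{\left\lvert #1 \right\rangle}
\newcommand{\bra}[1]{\left\langle #1 \right\rvert}
% now write \ket{\psi} instead of |\psi>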

9) Not using TeX or LaTeX

This one is so bad that it tops Scott’s list of signs that a claimed mathematical breakthrough is wrong. If you are typing up your results in Microsoft Word using the Comic Sans font, then perhaps you should be filling out TPS reports instead of writing scientific papers.