QIP 2015 Return of the Live-blogging, Day 1

Jan 14 update at the end.

The three Pontiffs are reunited at QIP 2015 and, having forgotten how painful liveblogging was in the past, are doing it again. This time we will aim for some slightly more selective comments.

In an ideal world the QIP PC would have written these sorts of summaries and posted them on scirate, but instead they are posted on easychair where most of you can’t access them. Sorry about this! We will argue at the business meeting for a more open refereeing process.

The first plenary talk was:

Ran Raz (Weizmann Institute)
How to Delegate Computations: The Power of No-Signaling Proofs
TR13-183

Why is the set of no-signalling distributions worth looking at? (That is, the set of conditional probability distributions p(a,b|x,y) that have well-defined marginals p(a|x) and p(b|y).) One way to think about it is as a relaxation of the set of “quantum” distributions, meaning the input-output distributions that are compatible with entangled states. The no-signalling polytope is defined by a polynomial number of linear constraints, and so is the sort of relaxation that is amenable to linear programming, whereas we don’t even know whether the quantum value of a game is computable. But is the no-signalling condition ever interesting in itself?
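To see why the linear-programming relaxation is so convenient, here is a quick sketch (mine, not from the talk) that optimizes a linear function over the no-signalling polytope with an off-the-shelf LP solver. Maximizing the CHSH winning probability over two-input, two-output no-signalling boxes gives 1 (the PR box), compared with 3/4 classically and \cos^2(\pi/8)\approx 0.85 quantumly.

```python
import numpy as np
from scipy.optimize import linprog

# Optimize a linear function (the CHSH winning probability) over the no-signalling
# polytope for binary inputs x, y and outputs a, b. Variables are p(a,b|x,y).
def idx(a, b, x, y):
    return ((a * 2 + b) * 2 + x) * 2 + y

A_eq, b_eq = [], []

# Normalization: sum_{a,b} p(a,b|x,y) = 1 for each input pair (x,y).
for x in range(2):
    for y in range(2):
        row = np.zeros(16)
        for a in range(2):
            for b in range(2):
                row[idx(a, b, x, y)] = 1
        A_eq.append(row); b_eq.append(1)

# No-signalling: Alice's marginal p(a|x) must not depend on y, and symmetrically for Bob.
for a in range(2):
    for x in range(2):
        row = np.zeros(16)
        for b in range(2):
            row[idx(a, b, x, 0)] += 1
            row[idx(a, b, x, 1)] -= 1
        A_eq.append(row); b_eq.append(0)
for b in range(2):
    for y in range(2):
        row = np.zeros(16)
        for a in range(2):
            row[idx(a, b, 0, y)] += 1
            row[idx(a, b, 1, y)] -= 1
        A_eq.append(row); b_eq.append(0)

# Objective: Pr[a XOR b = x AND y] with uniform inputs (negated, since linprog minimizes).
c = np.zeros(16)
for a in range(2):
    for b in range(2):
        for x in range(2):
            for y in range(2):
                if (a ^ b) == (x & y):
                    c[idx(a, b, x, y)] = -0.25

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=[(0, 1)] * 16)
print("max CHSH winning probability over no-signalling boxes:", -res.fun)   # -> 1.0
```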

Raz and his coauthors (Yael Kalai and Ron Rothblum) prove a major result (which we’ll get to below) about the computational power of multi-prover proof systems where the provers have access to arbitrary non-signalling distributions. But they began by trying to prove an apparently unrelated classical crypto result. In general, multiple provers are stronger than one prover. Classically we have MIP=NEXP and IP=PSPACE, and in fact that MIP protocol just requires one round, whereas k rounds with a single prover is (roughly) within the k’th level of the polynomial hierarchy (i.e. even below PSPACE). So simulating many provers with one prover seems in general crazy.

But suppose instead the provers are computationally limited. Suppose they are strong enough for the problem to be interesting (i.e. they are much stronger than the verifier, so it is worthwhile for the verifier to delegate some nontrivial computation to them) but too weak to break some FHE (fully homomorphic encryption) scheme. This requires computational assumptions, but nothing too outlandish. Then the situation might be very different. If the verifier sends its queries using FHE, then one prover might simulate many provers without compromising security. This was the intuition of a paper from 2000, which Raz and coauthors are finally able to prove. The catch is that even though the single prover can’t break the FHE, it can let its simulated provers play according to a no-signalling distribution. (Or at least this possibility cannot be ruled out.) So proving the security of 1-prover delegated computation requires not only the computational assumptions used for FHE, but also a multi-prover proof system that is secure against no-signalling distributions.

Via this route, Raz and coauthors found themselves in QIP territory. When they started it was known that MIPns[poly provers] is contained in EXP.

This work nails down the complexity of the many-prover setting, showing that EXP is also contained in MIPns[poly provers], so that in fact the classes are equal.

It is a nice open question whether the same is true for a constant number of provers, say 3. By comparison, three entangled provers or two classical provers are strong enough to contain NEXP.

One beautiful consequence is that optimizing a linear function over the no-signalling polytope is roughly a P-complete problem. Previously it was known that linear programming was P-complete, meaning that it was unlikely to be solvable in, say, log space. But this work shows that this is true even if the constraints are fixed once and for all, and only the objective function is varied. (And we allow error.) This is established in a recent followup paper [ECCC TR14-170] by two of the same authors.

Francois Le Gall.
Improved Quantum Algorithm for Triangle Finding via Combinatorial Arguments
abstract arXiv:1407.0085

A technical tour-de-force that we will not do justice to here. One intriguing barrier-breaking aspect of the work is that all previous algorithms for triangle finding worked equally well for the standard unweighted case as well as a weighted variant in which each edge is labeled by a number and the goal is to find a set of edges (a,b), (b,c), (c,a) whose weights add up to a particular target. Indeed this algorithm has a query complexity for the unweighted case that is known to be impossible for the weighted version. A related point is that this shows the limitations of the otherwise versatile non-adaptive learning-graph method.
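Just to pin down the weighted variant being discussed, here is a trivial classical brute force (my own illustration, obviously not the quantum algorithm):

```python
from itertools import combinations

def weighted_triangle(weights, target):
    # Weighted triangle finding: given edge weights (a dict keyed by frozenset({u, v})),
    # return a triangle whose three edge weights sum to the target, if one exists.
    vertices = {v for e in weights for v in e}
    for a, b, c in combinations(sorted(vertices), 3):
        edges = [frozenset({a, b}), frozenset({b, c}), frozenset({a, c})]
        if all(e in weights for e in edges) and sum(weights[e] for e in edges) == target:
            return (a, b, c)
    return None

# Unweighted triangle finding is the special case where every weight is 1 and target is 3.
w = {frozenset({0, 1}): 2, frozenset({1, 2}): 5, frozenset({0, 2}): 3, frozenset({2, 3}): 1}
print(weighted_triangle(w, 10))   # -> (0, 1, 2)
```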

Ryan O’Donnell and John Wright
Quantum Spectrum Testing
abstract arXiv:1501.05028

A classic problem: given \rho^{\otimes n} for \rho an unknown d-dimensional state, estimate some property of \rho. One problem where the answer is still shockingly unknown is to estimate \hat\rho in a way that achieves \mathbb{E} \|\rho-\hat \rho\|_1 \leq\epsilon.
Results from compressed sensing show that n = \tilde\Theta(d^2r^2) for single-copy two-outcome measurements of rank-r states with constant error, but if we allow block measurements then maybe we can do better. Perhaps O(d^2/\epsilon) is possible using the Local Asymptotic Normality results of Guta and Kahn [0804.3876], as Hayashi has told me, but the details are – if we are feeling generous – still implicit. I hope that he, or somebody, works them out. (18 Jan update: thanks Ashley for fixing a bug in an earlier version of this.)

The current talk focuses instead on properties of the spectrum, e.g. how many copies are needed to distinguish a maximally mixed state of rank r from one of rank r+c? The symmetry of the problem (invariant under both permutations and rotations of the form U^{\otimes n}) means that we can WLOG consider “weak Schur sampling” meaning that we measure which S_n \times U_d irrep our state lies in, and output some function of this result. This irrep is described by an integer partition which, when normalized, is a sort of mangled estimate of the spectrum. It remains only to analyze the accuracy of this estimator in various ways. In many of the interesting cases we can say something nontrivial even if n= o(d^2). This involves some delicate calculations using a lot of symmetric polynomials. Some of these first steps (including many of the canonical ones worked out much earlier by people like Werner) are in my paper quant-ph/0609110 with Childs and Wocjan. But the current work goes far far beyond our old paper and introduces many new tools.
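As an aside, here is a quick numerical illustration (mine, not from the paper) of that “mangled estimate”, using the standard fact that weak Schur sampling of \rho^{\otimes n} produces a Young diagram distributed as the RSK shape of n i.i.d. draws from the spectrum of \rho:

```python
import numpy as np
from bisect import bisect_right

def rsk_shape(word):
    # Shape of the RSK insertion tableau of a word (row insertion).
    rows = []
    for x in word:
        for row in rows:
            j = bisect_right(row, x)       # leftmost entry strictly greater than x
            if j == len(row):
                row.append(x)
                break
            x, row[j] = row[j], x          # bump it into the next row
        else:
            rows.append([x])
    return [len(r) for r in rows]

# Weak Schur sampling of n copies of rho, simulated classically via RSK on i.i.d. samples
# from the spectrum; the normalized row lengths are the "mangled" spectrum estimate.
rng = np.random.default_rng(0)
spectrum = [0.5, 0.3, 0.2]
n = 2000
word = rng.choice(len(spectrum), size=n, p=spectrum)
print([round(length / n, 3) for length in rsk_shape(word)])   # roughly [0.5, 0.3, 0.2]
```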

Han-Hsuan Lin and Cedric Yen-Yu Lin. Upper bounds on quantum query complexity inspired by the Elitzur-Vaidman bomb tester
abstract arXiv:1410.0932

This talk considers a new model of query complexity inspired by the Elitzur-Vaidman bomb tester. The bomb tester is a classic demonstration of quantum weirdness: You have a collection of bombs with detonation devices so sensitive that even a single photon impacting one will set it off. Some of these bombs are live and some are duds, and you’d like to know which is which. Classically, you don’t stand a chance, but quantum mechanically, you can send a photon into a beamsplitter and place the bomb in one arm of a Mach-Zehnder interferometer. A dud leaves the interference intact, so the photon always exits at the same detector. A live bomb acts as a which-path measurement and destroys the interference: half the time the bomb explodes, but if it doesn’t, then half the time the other detector clicks and you have detected a live bomb without setting it off. There are various tricks that you can play related to the quantum Zeno effect that let you do much better than this success probability.
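For concreteness, here is the textbook Zeno-style analysis (standard material, not specific to this paper): rotate the photon’s path by a small angle \pi/2N per round and let the bomb act as a measurement, so that surviving all N rounds both avoids the explosion and reveals that the bomb is live.

```python
import numpy as np

# Quantum-Zeno bomb testing (textbook analysis, not from the talk): rotate the photon's
# path by pi/(2N) per round, then let the bomb "measure" which arm the photon is in.
# A live bomb absorbs the photon with probability sin^2(pi/(2N)) in each round; if the
# photon survives all N rounds it is still in its original arm, revealing a live bomb.
# A dud lets the rotations accumulate, so the photon ends up entirely in the other arm.
for N in [5, 10, 100, 1000]:
    theta = np.pi / (2 * N)
    p_success = np.cos(theta) ** (2 * N)   # live bomb detected with no explosion
    print(N, round(p_success, 4))
# The success probability behaves like 1 - pi^2/(4N), so it approaches 1 for large N.
```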

The authors define a model of query complexity where one risks explosion for some events, and they showed that the quantum query complexity is related to the bomb query complexity by B(f) = \Theta(Q(f)^2). There were several other interesting results in this talk, but we ran out of steam as it was the last talk before lunch.

Kirsten Eisentraeger, Sean Hallgren, Alexei Kitaev and Fang Song
A quantum algorithm for computing the unit group of an arbitrary degree number field
STOC 2014

One unfortunate weakness of this work: The authors, although apparently knowledgeable about Galois theory, don’t seem to know about this link.

The unit group is a fundamental object in algebraic number theory. It comes up frequently in applications as well, and is used for fully homomorphic encryption, code obfuscation, and many other things.

My [Steve] personal way of understanding the unit group of a number field is that it is a sort of gauge group with respect to the factoring problem. The units in a ring are those numbers with multiplicative inverses. In the ring of integers, where the units are just \pm1 , we can factor composite numbers into 6 = 3 \times 2 = (-3)\times (-2). Both of these are equally valid factorizations; they are equivalent modulo units. In more complicated settings where unique factorization fails, we have factorization into prime ideals, and the group of units can in general become infinite (though always discrete).
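For a concrete example (standard textbook material, not from the talk): in \mathbb{Z}[\sqrt{-5}] unique factorization fails, since 6 = 2 \times 3 = (1+\sqrt{-5})(1-\sqrt{-5}) with all four factors irreducible, while already in \mathbb{Z}[\sqrt{2}] the unit group is infinite: it is generated by -1 and the fundamental unit 1+\sqrt{2}, because (1+\sqrt{2})(-1+\sqrt{2})=1.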

The main result of this talk is a quantum algorithm for finding the unit group of a number field of arbitrary degree. One of the technical problems that they had to solve to get this result was to solve the hidden subgroup problem on a continuous group, namely \mathbb{R}^n.

The speaker also announced some work in progress: a quantum algorithm for the principal ideal problem and the class group problem in arbitrary degree number fields [Biasse Song ‘14]. It sounds like not all the details of this are finished yet.

Dominic Berry, Andrew Childs and Robin Kothari
Hamiltonian simulation with nearly optimal dependence on all parameters
abstract 1501.01715

Hamiltonian simulation is not only the original killer app of quantum computers, but also a key subroutine in a large and growing number of problems. I remember thinking it was pretty slick that higher-order Trotter-Suzuki could achieve a run-time of \|H\|t\text{poly}(s)(\|H\|t/\epsilon)^{o(1)} where t is the time we simulate the Hamiltonian for and s is the sparsity. I also remember believing that the known optimality theorems for Trotter-Suzuki (sorry I can’t find the reference, but it involves decomposing e^{t(A+B)} for the free Lie algebra generated by A,B) meant that this was essentially optimal.
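As a baseline, here is what first-order Trotter looks like numerically on a toy one-qubit example (my illustration; the algorithm in the talk is far more sophisticated): the error of (e^{-iAt/r}e^{-iBt/r})^r relative to e^{-i(A+B)t} shrinks like 1/r.

```python
import numpy as np
from scipy.linalg import expm
from numpy.linalg import matrix_power, norm

# Toy illustration of first-order Trotterization (not the algorithm from the talk):
# approximate e^{-i(A+B)t} by (e^{-iAt/r} e^{-iBt/r})^r and watch the error fall ~ 1/r.
A = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X
B = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z
t = 1.0
exact = expm(-1j * (A + B) * t)
for r in [1, 10, 100, 1000]:
    step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
    print(r, norm(matrix_power(step, r) - exact, 2))   # spectral-norm error
```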

Fortunately, Berry, Childs and Kothari (and in other work, Cleve) weren’t so pessimistic, and have blasted past this implicit barrier. This work synthesizes everything that came before to achieve a run-time of \tau \text{poly}\log(\tau/\epsilon), where \tau = \|H\|_{\max}st. Here \|H\|_{\max} is \max_{i,j} |H_{i,j}|, which can be related to the earlier bounds via \|H\| \leq d \|H\|_{\max}.

One quote I liked: “but this is just a generating function for Bessel functions!” Miraculously, Dominic makes that sound encouraging. The lesson I suppose is to find an important problem (like Hamiltonian simulation) and to approach it with courage.
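(Presumably the identity in question is the classical generating function for the Bessel functions of the first kind, \exp\left(\tfrac{x}{2}(t - t^{-1})\right) = \sum_{k=-\infty}^{\infty} J_k(x)\, t^k, which is standard special-functions material rather than anything specific to the paper.)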

Salman Beigi and Amin Gohari
Wiring of No-Signaling Boxes Expands the Hypercontractivity Ribbon
abstract arXiv:1409.3665

If you have some salt water with salt concentration 0.1% and some more with concentration 0.2%, then anything in the range [0.1, 0.2] is possible, but no amount of mixing will give you even a single drop with concentration 0.05% or 0.3%, even if you start with oceans at the initial concentrations. Similarly if Alice and Bob share an unlimited number of locally unbiased random bits with correlation \eta they cannot produce even a single bit with correlation \eta' > \eta if they don’t communicate. This was famously proved by Reingold, Vadhan and Wigderson.

This talk does the same thing for no-signaling boxes. Let’s just think about noisy PR boxes to make this concrete. The exciting thing about this work is that it doesn’t just prove a no-distillation theorem but it defines an innovative new framework for doing so. The desired result feels like something from information theory, in that there is a monotonicity argument, but it needs to use quantities that do not increase with tensor product.

Here is one such quantity. Define the classical correlation measure \rho(A,B) = \max \text{Cov}(f,g) where f:A\mapsto \mathbb{R}, g:B\mapsto \mathbb{R} and each have variance 1. Properties:

  • 0 \leq \rho(A,B) \leq 1
  • \rho(A,B) =0 iff p_{AB} = p_A \cdot p_B
  • \rho(A^n, B^n) = \rho(A,B)
  • for any no-signaling box, \rho(A,B) \leq \max(\rho(A,B|X,Y), \rho(X,Y))

Together, these properties show that any wiring of boxes cannot increase this quantity.
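For finite alphabets this \rho is the classical maximal correlation (Hirschfeld–Gebelein–Rényi), and it can be computed from a singular value decomposition. Here is a small numerical sketch (mine, not from the talk) that also checks the tensorization property from the list above:

```python
import numpy as np

def maximal_correlation(p):
    # Maximal correlation of a joint pmf p[a, b]: the maximum of Cov(f(A), g(B)) over
    # unit-variance f, g, as in the definition above. A standard fact (Witsenhausen) is
    # that it equals the second-largest singular value of Q[a,b] = p[a,b]/sqrt(pA[a]*pB[b]).
    p = np.asarray(p, dtype=float)
    pA, pB = p.sum(axis=1), p.sum(axis=0)
    Q = p / np.sqrt(np.outer(pA, pB))
    s = np.linalg.svd(Q, compute_uv=False)
    return s[1]   # s[0] = 1 always, with singular vectors sqrt(pA), sqrt(pB)

# Two uniform bits that agree with probability (1 + eta)/2 have maximal correlation eta.
eta = 0.3
p = np.array([[(1 + eta) / 4, (1 - eta) / 4],
              [(1 - eta) / 4, (1 + eta) / 4]])
print(maximal_correlation(p))              # ~ 0.3
print(maximal_correlation(np.kron(p, p)))  # tensoring does not help: still ~ 0.3
```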

The proof of this involves a more sophisticated correlation measure that is not just a single number but is a region called the hypercontractivity ribbon (originally due to [Ahlswede, Gacs ‘76]). This is defined to be the set of (\lambda_1, \lambda_2) such that for any f,g we have
\mathbb{E}[f_A g_B] \leq \|f_A\|_{\frac{1}{\lambda_1}} \|g_B\|_{\frac{1}{\lambda_2}}
A remarkable result of [Nair ‘14] is that this is equivalent to the condition that
I(U;AB) \geq \lambda_1 I(U;A) + \lambda_2 I(U;B)
for any extension of the distribution on AB to one on ABU.

Some properties.

  • The ribbon is [0,1]\times [0,1] iff A,B are independent.
  • It is stable under tensor power.
  • Monotonicity: local operations on A and B enlarge the ribbon R.

For boxes define R(A,B|X,Y) = \cap_{x,y} R(A,B|x,y). The main theorem is then that rewiring never shrinks the hypercontractivity ribbon. And as a result, the noise of PR boxes cannot be reduced by wiring.

These techniques are beautiful and seem as though they should have further application.

Masahito Hayashi
Estimation of group action with energy constraint
abstract arXiv:1209.3463

Your humble bloggers were at this point also facing an energy constraint which limited our ability to estimate what happened. The setting is that you pick a state, nature applies a unitary (specifically from a group representation) and then you pick a measurement and try to minimize the expected error in estimating the group element corresponding to what nature did. The upshot is that entanglement seems to give a quadratic improvement in metrology. Noise (generally) destroys this. This talk showed that a natural energy constraint on the input also destroys this. One interesting question from Andreas Winter was about what happens when energy constraints are applied also to the measurement, along the lines of 1211.2101 by Navascues and Popescu.

Jan 14 update: forgot one! Sorry Ashley.

Ashley Montanaro
Quantum pattern matching fast on average
abstract
arXiv:1408.1816

Continuing the theme of producing shocking and sometimes superpolynomial speedups to average-case problems, Ashley shows that finding a random pattern of length m in a random text of length n can be done in quantum time \tilde O(\sqrt{n/m}\exp(\sqrt{\log m})). Here “random” means something subtle. The text is uniformly random and the pattern is either uniformly random (in the “no” case) or is a random substring of the text (in the “yes” case). There is also a higher-dimensional generalization of the result.

One exciting thing about this is that it is a fairly natural application of Kuperberg’s algorithm for the dihedral-group HSP; in fact the first such application, although Kuperberg’s original paper does mention a much less natural such variant. (correction: not really the first – see Andrew’s comment below.)
It is interesting to think about this result in the context of the general question about quantum speedups for promise problems. It has long been known that query complexity cannot be improved by more than a polynomial (perhaps quadratic) factor for total functions. The dramatic speedups for things like the HSP, welded trees and even more contrived problems must then use the fact that they work for partial functions, and indeed even “structured” functions. Pattern matching is of course a total function, but not one that will ever be hard on average over a distribution with, say, i.i.d. inputs. Unless the pattern is somehow planted in the text, most distributions simply fail to match with overwhelming probability. It is funny that for i.i.d. bit strings this stops being true when m = O(\log n), which is almost exactly when Ashley’s speedup becomes merely quadratic. So pattern matching is a total function whose hard distributions all look “partial” in some way, at least when quantum speedups are possible. This is somewhat vague, and it may be that some paper out there expresses the idea more clearly.
Part of the strength of this paper is then finding a problem where the promise is so natural. It gives me new hope for the future relevance of things like the HSP.

Your Guide to Australian Slang for QIP Sydney

[Image: an Australian white ibis.]
To everyone that’s attending QIP, welcome to Sydney!

Since I’ve already had to clarify a number of the finer points of Australian slang to my fellow attendees, I thought I would solve the general problem and simply post a helpful dictionary that translates some uniquely Australian words and usages into standard American English.

Also, this thing on the right is called an ibis. It’s not venomous.

Coffee

Flat white – Try this at least once while you’re here, preferably prepared by a highly skilled barista at one of the better cafes. It’s similar to a latte or to a cappuccino without the foam, but there are important differences.

Long black – Australian version of the Americano, a bit stronger and with crema. It’s the closest you’ll get to a cup of filtered drip coffee, if that’s your thing.

Short black – If you want a standard espresso, order a short black.

The Beach

Thongs – Sandals, or flip-flops. The highest level of dress code in Australia is “no thongs”.

Togs – Swimwear.

Esky – A cooler; the place where you store your beer to keep it cold while you’re getting pissed at the beach.

Pissed – Drunk; the state that a nontrivial fraction of people are in because it’s legal to drink at the beach.

Sunnies – Sunglasses.

Mozzy – Mosquito. Usually not a problem at the beach because there is almost always a breeze.

The Pub

Schooner – (SKOO-ner) A medium-sized glass of beer.

Jug – A pitcher of beer.

Shout – To buy a beer for someone, or a round of beers for your table.

Skol – To chug a beer. Usage: “Hey Robbo, if you skol that schooner I’ll shout you a jug.”

Hotel – In addition to the standard meaning, a hotel is a particular style of pub. It usually has high occupancy and a limited beer selection (though this is starting to improve as craft beer is finally catching on here).

Sports

Football – see “Footy”.

Footy – Rugby. It comes in several varieties, with League and Union being the two most popular.

Gridiron – American football. Not generally watched much down under.

Cricket – An inscrutable game that takes 5 days to play. I think the only way you could like this game is to have the British invade, conquer your land, and occupy your territory under their colonial yoke for at least a few generations. That seems to be how everyone else got into it.

Rooting – Do not make the mistake of saying that you are “rooting for team X”; in Australia, rooting is slang for having sex.

Miscellaneous

Arvo – Afternoon.

Bickie – A cookie or biscuit.

Brekkie – Breakfast.

Fair dinkum – The closest translation is probably “for real”. It’s used to express the sentiment that you’re not deceiving the listener or exaggerating your claims.

Should Papers Have Unit Tests?

Perhaps the greatest shock I’ve had in moving from the hallowed halls of academia to the workmanlike depths of everyday software development is the amount of testing that is done when writing code. Likely I’ve written more test code than non-test code over the last three-plus years at Google. The most common type of test I write is a “unit test”, in which a small portion of code is tested for correctness (hey Class, do you do what you say?). The second most common type is an “integration test”, which attempts to test that the units working together are functioning properly (hey Server, do you really do what you say?). Testing has many benefits: correctness of code, of course, but it is also important for ease of changing code (refactoring), supporting decoupled and simplified design (untestable code is often a sign that your units are too complicated, or that your units are too tightly coupled), and more.
Over the holiday break, I’ve been working on a paper (old habit, I know) with lots of details that I’d like to make sure I get correct. Throughout the entire paper writing process, one spends a lot of time checking and rechecking the correctness of the arguments. And so the thought came to my mind while writing this paper, “boy it sure would be easier to write this paper if I could write tests to verify my arguments.”
In a larger sense, all papers are a series of tests, small arguments convincing the reader of the veracity or likelihood of the given argument. And testing in a programming environment has a vital distinction that the tests are automated, with the added benefit that you can run them often as you change code and gain confidence that the contracts enforced by the tests have not been broken. But perhaps there would be a benefit to writing a separate argument section with “unit tests” for different portions of a main argument in a paper. Such unit test sections could be small, self-contained, and serve as supplemental reading that could be done to help a reader gain confidence in the claims of the main text.
I think some of the benefits for having a section of “unit tests” in a paper would be

  • Documenting limit tests A common trick of the trade in physics papers is to take a parameter to a limiting value to see how the equations behave. Often one can recover known results in such limits, or show that certain relations hold under appropriate rescaling. These types of arguments give you confidence in a result, but are often left out of papers. This is sort of akin to edge case testing by programmers (a toy example appears after this list).
  • Small examples When a paper gets abstract, one often spends a lot of time trying to ground oneself by working with small examples (unless you are Grothendieck, of course.) Often one writes a paper by interjecting these examples in the main flow of the paper, but they would fit more naturally in a unit testing section.
  • Alternative explanation testing When you read an experimental physics paper, you often wonder: am I really supposed to believe the effect that they are talking about? Often large portions of the paper are devoted to trying to settle such arguments, but when you listen to experimentalists grill each other you find that there is an even further depth to these arguments. “Did you consider that your laser is actually exciting X, and all you’re seeing is Y?” The amount of this that goes on is huge, and sadly, not documented for the greater community.
  • Combinatorial or property checks Often one finds oneself checking that a result works by doing something like counting instances to check that they sum to a total, or that a property holds before and after a transformation (an invariant). While these are useful for providing evidence that an argument is correct, they can often feel a bit out of place in a main argument.
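As a toy illustration of the “limit test” idea (mine, not from the post), here is a check, runnable as an ordinary automated test, that relativistic kinetic energy reduces to \frac{1}{2}mv^2 in the v \ll c limit:

```python
import sympy as sp

def test_kinetic_energy_newtonian_limit():
    # "Limit test": relativistic kinetic energy (gamma - 1) m c^2 should reduce to the
    # Newtonian (1/2) m v^2 when v/c -> 0.
    m, v, c = sp.symbols('m v c', positive=True)
    gamma = 1 / sp.sqrt(1 - v**2 / c**2)
    kinetic = (gamma - 1) * m * c**2
    leading = sp.series(kinetic, v, 0, 4).removeO()
    assert sp.simplify(leading - sp.Rational(1, 2) * m * v**2) == 0

test_kinetic_energy_newtonian_limit()
print("Newtonian limit check passed")
```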

Of course it would be wonderful if there were a way that these little “units” could be automatically executed. But the best path I can think of right now towards getting to that starts with the construction of an artificial mind. (Yeah, I think perhaps I’ve been at Google too long.)

Self-correcting Fractals

A really exciting paper appeared on the arxiv today: A proposal for self-correcting stabilizer quantum memories in 3 dimensions (or slightly less), by Courtney Brell. It gives the strongest evidence yet that self-correcting quantum memories are possible in “physically realistic” three-dimensional lattice models. In particular, Courtney has constructed families of local Hamiltonians in 3D whose terms consist of X- and Z-type stabilizer generators and that show phase-transition behavior akin to the 2D Ising model for both the X- and Z-type error sectors. This result doesn’t achieve a complete theoretical solution to the question of whether self-correcting quantum memories can exist in principle, as I’ll explain below, but it makes impressive progress using a mix of rigorous analysis and physical argument.

First, what do I mean by “physically realistic”? Well, obviously I don’t mean physically realistic (without quotes)—that’s a much greater challenge. Rather, we want to abstractly characterize some features that should be shared by a physically realistic implementation, but with enough leeway that a theorist can get creative. To capture this, Courtney introduces the so-called Caltech Rules for a self-correcting quantum memory.

The phrase “the Caltech Rules” is (I believe) attributable to David Poulin. Quantum memory aficionados have been debating these rules in emails and private discussions for the last few years, but I think this is the first time someone has put them in print. As rules, they aren’t really set in stone. They consist of a list of criteria that are either necessary or seemingly necessary to avoid models that are self-correcting for trivial and unphysical reasons (e.g., scaling the coupling strengths as a function of n). In Courtney’s version of the rules, we require a model with finite-dimensional spins (so no bosonic or fermionic models allowed… this might be objectionable to some people), bounded-strength short-range interactions between the spins, a constant density of spins, a perturbatively stable degenerate ground space for the encoded states, an efficient decoding algorithm, and an exponential memory lifetime against low-temperature thermal noise. One might wish to add even more desiderata like translation-invariant couplings or a spectral gap (which is closely related to stability), but finding a self-correcting memory subject to these constraints is already a tall order. For some more discussion on these points, check out another awesome paper that came out on the arxiv yesterday, an excellent review article on quantum memories at finite temperature by Ben Brown et al.

To motivate the construction, it helps to remember everyone’s favorite models, the Ising model and the toric code. When the temperature T is zero, it’s easy to store a classical bit using the 1D Ising model; this is just a repetition code. Similarly, the 2D toric code can store quantum information at T=0. Both of these codes become unstable as memories at T > 0 because of the presence of string-like logical operators. The physical process by which these strings are created costs some energy, but then the strings can stretch and grow without any energy cost, and thermal fluctuations alone will create enough strings in a short time to cause a decoding failure. By contrast, the 2D Ising model can store a classical bit reliably for an exponential amount of time if you encode in the total magnetization and you are below the Curie temperature. The logical operators are now membranes that cost energy to grow. Similarly, the 4D toric code has such a phase transition, and this is because the X- and Z-type errors both act analogously to 2D Ising models with membranous logical operators.

[Image: Sierpinski carpet, with edges placed to form a “Sierpinski graph”.]

The codes that Courtney defines are called embeddable fractal product codes (EFPC). The idea is that, if a product of two 1D Ising models isn’t a 2D self-correcting model, but a product of two 2D Ising models is a self-correcting memory, then what happens if we take two 1.5D Ising models and try to make a 3D self-correcting memory? The backbone of the construction consists of fractals such as the Sierpinski carpet that have infinite ramification order, meaning that an infinite number of edges on an associated graph must be cut to split it into two infinite components. Defining an Ising model on the Sierpinski graph yields a finite-temperature phase transition for the same reason as the 2D Ising model, the Peierls argument, which is essentially a counting argument about the density of domain walls in equilibrium with fixed boundary conditions. This is exactly the kind of behavior needed for self-correction.
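For a feel for the underlying geometry, here is a small sketch (mine, not from the paper) that builds the cells of a level-k Sierpinski carpet and the nearest-neighbour edges of the associated “Sierpinski graph” on which the Ising model is defined:

```python
def carpet_cells(k):
    # Occupied cells of the level-k Sierpinski carpet inside a 3^k x 3^k grid:
    # a cell is removed if, at any scale, it sits in the middle ninth.
    def occupied(x, y):
        while x or y:
            if x % 3 == 1 and y % 3 == 1:
                return False
            x, y = x // 3, y // 3
        return True
    n = 3 ** k
    return {(x, y) for x in range(n) for y in range(n) if occupied(x, y)}

def carpet_edges(cells):
    # Nearest-neighbour edges between occupied cells (the "Sierpinski graph").
    return {frozenset({c, (c[0] + dx, c[1] + dy)})
            for c in cells for dx, dy in [(1, 0), (0, 1)]
            if (c[0] + dx, c[1] + dy) in cells}

for k in range(1, 5):
    cells = carpet_cells(k)
    print(k, len(cells), len(carpet_edges(cells)))   # 8^k sites, and a comparable edge count
```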

[Image: Splitting the Sierpinski graph into two infinite components necessarily cuts an infinite number of edges.]

Using the adjacency of the Sierpinski graph, the next step is to use a toric code-like set of generators on this graph, paying careful attention to the boundary conditions (in particular, plaquette terms are placed in such a way that the stabilizer group contains all the cycles that bound areas of the fractal, at any length scale). Then using homological product codes gives a natural way to combine X-like and Z-like copies of this code into a new code that naturally lives in four dimensions. Although the natural way to embed this code requires all four spatial dimensions, it turns out that a low-distortion embedding is possible with distortion bounded by a small constant, so these codes can be compressed into three dimensions while retaining the crucial locality properties.

Remarkably, this construction gives a finite-temperature phase transition for both the X- and Z-type errors. It essentially inherits this from the fact that the Ising models on the Sierpinski graph have phase transitions, and it is a very strong indication of self-correcting behavior.

However, there are some caveats. There are many logical qubits in this code (in fact, the code has constant rate), and only the qubits associated to the coarsest features of the fractal have large distance. There are many logical qubits associated to small-scale features that have small distance and create an exponential degeneracy of the ground space. With such a large degeneracy, one worries about perturbative stability in the presence of a generic local perturbation. There are a few other caveats, for example the question of efficient decoding, but to me the issue of the degeneracy is the most interesting.

Overall, this is the most exciting progress since Haah’s cubic code. I think I’m actually becoming optimistic about the possibility of self-correction. It looks like Courtney will be speaking about his paper at QIP this year, so this is yet another reason to make it to Sydney this coming January.

A Breakthrough Donation for Computer Science

Lance Fortnow has a post summarizing some of the news affecting the CS community over the past month, including updates on various prizes as well as the significant media attention focusing on physics- and math-related topics such as movies about Turing and Hawking as well as Terence Tao on the Colbert Report.

From his post, I just learned that former Microsoft chief executive Steven Ballmer is making a donation to Harvard that will endow twelve—that’s right, 12—new tenured and tenure-track faculty positions in computer science. This is fantastic news and will have a huge positive impact on Harvard CS.

One thing missing from Lance’s list was news about the Breakthrough Prizes in mathematics and fundamental physics. In case you’ve been living under a rock, these prizes give a very hefty US $3 million purse to the chosen recipients. The winners are all luminaries in their field, and it’s great to see them get recognition for their outstanding work.

On the other hand, juxtaposing Ballmer’s donation and the Breakthrough Prizes couldn’t offer a starker contrast. It costs the same amount—$3 million—to endow a university full professor with appointments in more than one discipline at Duke University. My initial googling would suggest that this is a pretty typical figure at top-tier institutions.

What if, instead of offering a cash prize to the Breakthrough Prize winners, the reward was an upgrade to an endowed chair at the current institution, subject to the condition that the existing position would go to a new tenured or tenure-track hire in the same field? This seems to be a much better investment in science overall because it will help build a community of researchers around the prize winner, and the marginal benefit to this community from associating with the prize winner is likely far greater than any extra incentive the researchers might get within the current system to simply strive to win $3M cash.

Goodbye Professor Tombrello

This morning I awoke to the horrible news that Caltech Physics Professor Tom Tombrello had passed away. Professor Tombrello was my undergraduate advisor, my research advisor, a mentor, and, most importantly, a friend. His impact on me, from my career to the way I try to live my life, was profound.
Because life is surreal, just a few days ago I wrote this post that describes the event that led Professor Tombrello and me down entwined paths, my enrollment in his class Physics 11. Physics 11 was a class about how to create value in the world, disguised as a class about how to do “physics” research as an undergraduate. Indeed, in my own life, Professor Tombrello’s role was to make me think really really hard about what it meant to create. Sometimes this creation was in research, trying to figure out a new approach or even a new problem. Sometimes this creation was in a new career, moving to Google to be given the opportunity to build high impact creations. I might even say that this creation extends into the far reaches of Washington state, where we helped bring about the creation of a house most unusual.
There are many stories I remember about Professor Tombrello. From the slightly amusing, like the time after the Northridge earthquake when an aftershock shook our class while he was practicing his own special brand of teaching, and we all just sort of sat still until we heard his assistant, Michelle, shout out “That’s it! I’m outta here!” and go storming out. To the time I talked with him following the loss of one of his family members, and could see the profound sadness even in a man who pushed optimistically forward at full speed.
Some portraits:

After one visit to Professor Tombrello, I actually recorded my thoughts on our conversation:

This blog post is for me, not for you. Brought to you by a trip down memory lane visiting my adviser at Caltech.
Do something new. Do something exciting. Excel. Whether the path follows your momentum is not relevant.
Don’t dwell. Don’t get stuck. Don’t put blinders on.
Consider how the problem will be solved, not how you are going to solve it.
Remember Feynman: solve problems.
Nothing is not interesting, but some things are boring.
Dyson’s driving lesson: forced intense conversation to learn what the other has to say.
Avoid confirmatory sources of news, except as a reminder of the base. Keep your ear close to the brains: their hushed obsessions are the next big news.
Learn something new everyday but also remember to forget the things not worth knowing.
Technically they can do it or they can’t, but you can sure help them do it better when they can.
Create. Create. Create.
Write a book, listen to Sandra Tsing Loh, investigate Willow Garage, and watch Jeff Bezos to understand how to be a merchant.
Create. Create. Create.

So tonight, I’ll have a glass of red wine to remember my professor, think of his family, and the students to whom he meant so much. And tomorrow I’ll pick myself up, and try to figure out just what I can create next.

Sailing Stones: Mystery No More

My first research project, my first research paper, was on a perplexing phenomenon: the sliding rocks of Death Valley’s Racetrack playa. Racetrack playa is a large desolate dry lake bed that has one distinguishing feature above and beyond its amazing flatness. At the south end of the playa are a large number of large rocks (man-sized and smaller), and behind these rocks, if you visit in the summer, are long tracks caked into the dried earth of the playa. Apparently these rocks, during the winter months, move and leave these long tracks. I say apparently because, for many many years, no one had ever seen these rocks move. Until now! The following video makes me extremely happy.

This is a shot of one of the playa stones actually moving! This is the end result of a large study that sought to understand the mechanism behind the sliding stones, published recently in PLOS ONE.

In 1993, fresh out of Yreka High School, I found myself surrounded by 200+ geniuses taking Caltech’s first year physics class, Physics 1 (med schools sometimes ask students at Caltech to verify that they know Calculus because the transcripts have just these low numerical course indicators on them, and of course Physics 1 couldn’t actually be physics with calculus, could it?) It would be a lie to say that this wasn’t intimidating: some of the TAs in the class were full physics professors! I remember a test where the average score was 0.5 out of 10, and perhaps it didn’t help that my roommate studied with a Nobel prize winner as a high school student. Or that another freshman in my class was just finishing a paper with his parents on black holes (or that his dad is one of the founders of the theory of inflation!) At times I considered transferring, because that is what all Caltech students do when they realize how hard Caltech is going to be, and also because it wasn’t clear to me what being a physics major got you.

One day in Physics 1 it was announced that there was a class that you could gain entrance to that was structured to teach you not physics, but how to do creative research. Creativity: now this was something I truly valued! It was called Physics 11 and it was run by one Professor Tom Tombrello (I’d later see his schedule on the whiteboard with the abbreviation T2). The only catch was that you had to get accepted into the class, and to do this you had to do your best at solving a toy research problem, what the class termed a “hurdle”. The students from the previous class then helped select the new Physics 11 students based upon their performance on the hurdles. The first hurdle also caught my eye: it was a problem based upon the old song Mairzy Doats, which my father sang weekly while showering in the morning. So I set about working on the problem. I don’t remember much of my solution, except that it was long and involved lots of differential equations of increasing complexity. Did I mention that it was long? Really long. I handed in the hurdle, then promptly ran out of time to work on the second hurdle.

Because I’d not handed in the second hurdle, I sort of expected that I’d not get selected into the class. Plus I wasn’t even in the advanced section of physics 1 (the one TAed by the professors, now those kids were well prepared and smart!) But one late night I went to my mailbox, opened it, and found…nothing. I closed it, and then, for some strange reason, thought: hey maybe there is something stuck in there. So I returned and opened the box, dug deep, and pulled out an invitation to join physics 11! This story doesn’t mean much to you, but I can still smell, feel, and hear Caltech when I think of this event. Also I’ve always been under the impression that being accepted to this class was a mistake and really the invitation I got was meant for another student in a mailbox next to mine. But that’s a story for another session on the couch.

So I enrolled in Physics 11. It’s not much of a stretch to say that it was the inspiration for me to go to graduate school, to do a postdoc, and to become a pseudo-professor. Creative research is an amazing drug, and also, I believe, one of the great endeavors of humanity. My small contribution to the racetrack playa story was published in the Journal of Geology.

The basic mystery was what caused these rocks to move. Was it the wind? It seemed hard to get enough force to move the rocks. Was it ice? When you placed stakes around the rocks, some of the rocks moved out of the stakes and some did not. In the above paper we pointed out that a moving layer of water would mean that there was more wind down low than one would normally get, because the boundary layer was moving. We also looked for the effect of said boundary layer on the rocks’ motion and found a small effect.

The answer, however, as to why the rocks moved, turned out to be even more wonderful: ice sheets dislodging and bashing the rocks forward. A sort of combination of the two competing previous hypotheses! This short documentary explains it nicely.

So, another mystery solved! We know more about how the world works, not on a level of fundamental physics, but on a level of “because it is interesting”, and “because it is fun”, and isn’t that enough? Arthur C. Clarke, who famously gave airtime to these rocks, would, I think, have been very pleased with this turn of events.

QIP 2015

The website is up for QIP 2015, which will be held this year in beautiful Sydney, Australia. Here is a timeline of the relevant dates:

  • Submission of talks deadline: Sep 12, 2014
  • Submission of posters deadline: Oct 25, 2014
  • Decision on talks and posters submitted before talk deadline: Oct 20, 2014
  • Decision on posters submitted after talk deadline: Nov 15, 2014
  • Tutorial Session: Jan 10-11, 2015
  • Main Conference: Jan 12-16, 2015

And students, don’t worry, there are plans to include some student support scholarships, so we hope that many of you can attend. We’re looking forward to seeing you all here!

Elsevier again, and collective action

We all know about higher education being squeezed financially. Government support is falling and tuition is going up. We see academic jobs getting scarcer, and more temporary. The pressure for research to focus on the short term is going up. Some of these changes may be fair, since society always has to balance its immediate priorities against its long-term progress. At other times, like when comparing the NSF’s $7.6 billion FY2014 budget request to the ongoing travesty that is military procurement, it does feel as though we are eating our seed corn for not very wise reasons.
Against this backdrop, the travesty that is scientific publishing may feel like small potatoes. But now we are starting to find out just how many potatoes. Tim Gowers has been doing an impressive job of digging up exactly how much various British universities pay for their Elsevier subscriptions. Here is his current list. Just to pick one random example, the University of Bristol (my former employer), currently pays Elsevier a little over 800,000 pounds (currently $1.35M) for a year’s access to their journals. Presumably almost all research universities pay comparable amounts.
To put this number in perspective, let’s compare it not to the F-35, but to something that delivers similar value: arxiv.org. Its total budget for 2014 is about 750,000 US dollars (depending on how you count overhead), and of course this includes access for the entire world, not only the University of Bristol. To be fair, ScienceDirect has about 12 times as many articles and the median quality is probably higher. But overall it is clearly vastly more expensive for society to have its researchers communicate in this way.
Another way to view the £800,000 price tag is in terms of the salaries of about 40 lecturers (\approx assistant professors), or some equivalent mix of administrators, lecturers and full professors. The problem is that these are not substitutes. If Bristol hired 40 lecturers, they would not each spend one month per year building nearly-free open-access platforms and convincing the world to use them; they would go about getting grants, recruiting grad students and publishing in the usual venues. There are problems of collective action, of the path dependence that comes with a reputation economy and of the diffuse costs and concentrated benefits of the current system.
I wish I could end with some more positive things to say. I think at least for now it is worth getting across the idea that there is a crisis, and that we should all do what we can to help with it, especially when we can do so without personal cost. In this way, we can hopefully create new social norms. For example, it is happily unconventional now to not post work on arxiv.org, and I hope that it comes to be seen also as unethical. In the past, it was common to debate whether QIP should have published proceedings. Now major CS conferences are cutting themselves loose from parasitic professional societies (see in particular the 3% vote in favor of the status quo) and QIP has begun debating whether to require all submissions be accompanied by arxiv posts (although this is of course not at all clear-cut). If we cannot have a revolution, hopefully we can at least figure out an evolutionary change towards a better scientific publishing system. And then we can try to improve military procurement.

Quantum computers can work in principle

Gil Kalai has just posted on his blog a series of videos of his lectures entitled “why quantum computers cannot work.” For those of us who have followed Gil’s position on this issue over the years, the content of the videos is not surprising. The surprising part is the superior production value relative to your typical videotaped lecture (at least for the first overview video).

I think the high gloss on these videos has the potential to sway low-information bystanders into thinking that there really is a debate about whether quantum computing is possible in principle. So let me be clear.

There is no debate! The expert consensus on the evidence is that large-scale quantum computation is possible in principle.

Quoting “expert consensus” like this is an appeal to authority, and my esteemed colleagues will rebuke me for not presenting the evidence. Aram has done an admirable job of presenting the evidence, but the unfortunate debate format distorts perception of the issue by creating the classic “two sides to a story” illusion. I think it’s best to be unequivocal to avoid misunderstanding.

The program that Gil lays forth is a speculative research agenda, devoid of any concrete microscopic physical predictions, and no physicist has investigated any of it because it is currently neither clear enough nor convincing enough. At the same time, it would be extremely interesting if it one day leads to a concrete conjectured model of physics in which quantum computers do not work. To make the ideas more credible, it would help to have a few-qubit model that is at least internally consistent, and even better, one that doesn’t contradict the dozens of on-going experiments. I genuinely hope that Gil or someone else can realize this thrilling possibility someday.

For now, though, the reality is that quantum computation continues to make exciting progress every year, both on theoretical and experimental levels, and we have every reason to believe that this steady progress will continue. Quantum theory firmly predicts (via the fault-tolerance threshold theorem) that large-scale quantum computation should be achievable if noise rates and correlations are low enough, and we are fast approaching the era where the experimentally achievable noise rates begin to touch the most optimistic threshold estimates. In parallel, the field continues to make contributions to other lines of research in high-energy physics, condensed matter, complexity theory, cryptography, signal processing, and many others. It’s an exciting time to be doing quantum physics.

And most importantly, we are open to being wrong. We all know what happens if you try to update your prior by conditioning on an outcome that had zero support. Gil and other quantum computing skeptics like Alicki play a vital role in helping us sharpen our arguments and remove any blind spots in our reasoning. But for now, the arguments against large-scale quantum computation are simply not convincing enough to draw more than an infinitesimal sliver of expert attention, and it’s likely to remain this way unless experimental progress starts to systematically falter or a concrete and consistent competing model of quantum noise is developed.