Sailing Stones: Mystery No More

My first research project, and my first research paper, was on a perplexing phenomenon: the sliding rocks of Death Valley’s Racetrack playa. Racetrack playa is a large, desolate dry lake bed with one distinguishing feature above and beyond its amazing flatness. At the south end of the playa are a large number of rocks (roughly person-sized and smaller), and behind these rocks, if you visit in the summer, are long tracks caked into the dried earth of the playa. Apparently these rocks move during the winter months and leave these long tracks. I say apparently because, for many, many years, no one had ever seen the rocks move. Until now! The following video makes me extremely happy:

This is a shot of one of the playa stones actually moving! It is the end result of a large study that sought to understand the mechanism behind the sliding stones, published recently in PLOS ONE:

In 1993, fresh out of Yreka High School, I found myself surrounded by 200+ geniuses taking Caltech’s first-year physics class, Physics 1 (med schools sometimes ask students at Caltech to verify that they know calculus, because the transcripts have just these low numerical course indicators on them, and of course Physics 1 couldn’t actually be physics with calculus, could it?). It would be a lie to say that this wasn’t intimidating: some of the TAs in the class were full physics professors! I remember a test where the average score was 0.5 out of 10, and perhaps it didn’t help that my roommate had studied with a Nobel prize winner as a high school student. Or that another freshman in my class was just finishing a paper with his parents on black holes (or that his dad is one of the founders of the theory of inflation!). At times I considered transferring, because that is what all Caltech students do when they realize how hard Caltech is going to be, and also because it wasn’t clear to me what being a physics major got you.

One day in Physics 1 it was announced that there was a class you could gain entrance to that was structured to teach you not physics, but how to do creative research. Creativity: now this was something I truly valued! It was called Physics 11, and it was run by one Professor Tom Tombrello (I’d later see his schedule on the whiteboard with the abbreviation T2). The only catch was that you had to get accepted into the class, and to do this you had to do your best at solving a toy research problem, what the class termed a “hurdle”. The students from the previous class then helped select the new Physics 11 students based upon their performance on the hurdles. The first hurdle also caught my eye: it was a problem based upon the old song Mairzy Doats, which my father sang weekly while showering in the morning. So I set about working on the problem. I don’t remember much of my solution, except that it was long and involved lots of differential equations of increasing complexity. Did I mention that it was long? Really long. I handed in the hurdle, then promptly ran out of time to work on the second hurdle.

Because I’d not handed in the second hurdle, I sort of expected that I’d not get selected for the class. Plus I wasn’t even in the advanced section of Physics 1 (the one TAed by the professors; now those kids were well prepared and smart!). But one late night I went to my mailbox, opened it, and found…nothing. I closed it, and then, for some strange reason, thought: hey, maybe there is something stuck in there. So I returned and opened the box, dug deep, and pulled out an invitation to join Physics 11! This story doesn’t mean much to you, but I can still smell, feel, and hear Caltech when I think of this event. Also I’ve always been under the impression that being accepted to this class was a mistake and that the invitation I got was really meant for another student in a mailbox next to mine. But that’s a story for another session on the couch.

So I enrolled in Physics 11. It’s not much of a stretch to say that it was the inspiration for me to go to graduate school, to do a postdoc, and to become a pseudo-professor. Creative research is an amazing drug, and also, I believe, one of the great endeavors of humanity. My small contribution to the racetrack playa story was published in the Journal of Geology:

The basic mystery was what caused these rocks to move. Was it the wind? It seemed hard to get enough force to move the rocks. Was it ice? When you placed stakes around the rocks, some of the rocks moved out of the stakes and some did not. In the above paper we pointed out that a moving layer of water would mean there was more wind down low than one would normally get, because the boundary layer was moving. We also looked for the effect of said boundary layer on the rocks’ motion and found a small effect.

The answer, however, as to why the rocks moved turned out to be even more wonderful: ice sheets dislodging and pushing the rocks forward. A sort of combination of the two competing previous hypotheses! This short documentary explains it nicely:

So, another mystery solved! We know more about how the world works, not on the level of fundamental physics, but on the level of “because it is interesting” and “because it is fun”, and isn’t that enough? Arthur C. Clarke, who famously gave airtime to these rocks, would, I think, have been very pleased with this turn of events.

QIP 2015

[Image: Sydney skyline at dusk, Dec 2008]
The website is up for QIP 2015, which will be held this year in beautiful Sydney, Australia. Here is a timeline of the relevant dates:

  • Submission of talks deadline: Sep 12, 2014
  • Submission of posters deadline: Oct 25, 2014
  • Decision on talks and posters submitted before talk deadline: Oct 20, 2014
  • Decision on posters submitted after talk deadline: Nov 15, 2014
  • Tutorial Session: Jan 10-11, 2015
  • Main Conference: Jan 12-16, 2015

And students, don’t worry, there are plans to include some student support scholarships, so we hope that many of you can attend. We’re looking forward to seeing you all here!

Elsevier again, and collective action

We all know about higher education being squeezed financially. Government support is falling and tuition is going up. We see academic jobs getting scarcer, and more temporary. The pressure for research to focus on the short term is going up. Some of these changes may be fair, since society always has to balance its immediate priorities against its long-term progress. At other times, like when comparing the NSF’s $7.6 billion FY2014 budget request to the ongoing travesty that is military procurement, it does feel as though we are eating our seed corn for not very wise reasons.
Against this backdrop, the travesty that is scientific publishing may feel like small potatoes. But now we are starting to find out just how many potatoes. Tim Gowers has been doing an impressive job of digging up exactly how much various British universities pay for their Elsevier subscriptions. Here is his current list. Just to pick one random example, the University of Bristol (my former employer) currently pays Elsevier a little over 800,000 pounds (about $1.35M at current exchange rates) for a year’s access to their journals. Presumably almost all research universities pay comparable amounts.
To put this number in perspective, let’s compare it not to the F-35, but to something that delivers similar value: arxiv.org. Its total budget for 2014 is about 750,000 US dollars (depending on how you count overhead), and of course this includes access for the entire world, not only the University of Bristol. To be fair, ScienceDirect has about 12 times as many articles and the median quality is probably higher. But overall it is clearly vastly more expensive for society to have its researchers communicate in this way.
Another way to view the £800,000 price tag is in terms of the salaries of about 40 lecturers (≈ assistant professors), or some equivalent mix of administrators, lecturers and full professors. The problem is that these are not substitutes. If Bristol hired 40 lecturers, they would not each spend one month per year building nearly-free open-access platforms and convincing the world to use them; they would go about getting grants, recruiting grad students and publishing in the usual venues. There are problems of collective action, of the path dependence that comes with a reputation economy, and of the diffuse costs and concentrated benefits of the current system.
I wish I could end with some more positive things to say. I think at least for now it is worth getting across the idea that there is a crisis, and that we should all do what we can to help with it, especially when we can do so without personal cost. In this way, we can hopefully create new social norms. For example, it is happily unconventional now to not post work on arxiv.org, and I hope that it comes to be seen also as unethical. In the past, it was common to debate whether QIP should have published proceedings. Now major CS conferences are cutting themselves loose from parasitic professional societies (see in particular the 3% vote in favor of the status quo) and QIP has begun debating whether to require all submissions be accompanied by arxiv posts (although this is of course not at all clear-cut). If we cannot have a revolution, hopefully we can at least figure out an evolutionary change towards a better scientific publishing system. And then we can try to improve military procurement.

Quantum computers can work in principle

Gil Kalai has just posted on his blog a series of videos of his lectures entitled “why quantum computers cannot work.” For those of us who have followed Gil’s position on this issue over the years, the content of the videos is not surprising. The surprising part is the superior production value relative to your typical videotaped lecture (at least for the first overview video).

I think the high gloss on these videos has the potential to sway low-information bystanders into thinking that there really is a debate about whether quantum computing is possible in principle. So let me be clear.

There is no debate! The expert consensus on the evidence is that large-scale quantum computation is possible in principle.

Quoting “expert consensus” like this is an appeal to authority, and my esteemed colleagues will rebuke me for not presenting the evidence. Aram has done an admirable job of presenting the evidence, but the unfortunate debate format distorts perception of the issue by creating the classic “two sides to a story” illusion. I think it’s best to be unequivocal to avoid misunderstanding.

The program that Gil lays forth is a speculative research agenda, devoid of any concrete microscopic physical predictions, and no physicist has investigated any of it because it is currently neither clear enough nor convincing enough. At the same time, it would be extremely interesting if it one day led to a concrete conjectured model of physics in which quantum computers do not work. To make the ideas more credible, it would help to have a few-qubit model that is at least internally consistent, and even better, one that doesn’t contradict the dozens of ongoing experiments. I genuinely hope that Gil or someone else can realize this thrilling possibility someday.

For now, though, the reality is that quantum computation continues to make exciting progress every year, both on theoretical and experimental levels, and we have every reason to believe that this steady progress will continue. Quantum theory firmly predicts (via the fault-tolerance threshold theorem) that large-scale quantum computation should be achievable if noise rates and correlations are low enough, and we are fast approaching the era where the experimentally achievable noise rates begin to touch the most optimistic threshold estimates. In parallel, the field continues to make contributions to other lines of research in high-energy physics, condensed matter, complexity theory, cryptography, signal processing, and many others. It’s an exciting time to be doing quantum physics.

And most importantly, we are open to being wrong. We all know what happens if you try to update your prior by conditioning on an outcome that had zero support. Gil and other quantum computing skeptics like Alicki play a vital role in helping us sharpen our arguments and remove any blind spots in our reasoning. But for now, the arguments against large-scale quantum computation are simply not convincing enough to draw more than an infinitesimal sliver of expert attention, and it’s likely to remain this way unless experimental progress starts to systematically falter or a concrete and consistent competing model of quantum noise is developed.

TQC 2014!

While many of us are just recovering from QIP, I want to mention that the submission deadline is looming for the conference TQC, which perhaps should be called TQCCC because its full name is Theory of Quantum Computation, Communication and Cryptography. Perhaps this isn’t done because it would make the conference seem too classical? But TQQQC wouldn’t work so well either. I digress.
The key thing I want to mention is the imminent 15 Feb submission deadline.
I also want to mention that TQC is continuing to stay ahead of the curve with its open-access author-keeps-copyright proceedings, and this year with some limited open reviewing (details here). I recently spoke to a doctor who complained that despite even her Harvard Medical affiliation, she couldn’t access many relevant journals online. While results of taxpayer-funded research on drug efficacy, new treatments and risk factors remain locked up, at least our community is ensuring that anyone wanting to work on the PPT bound entanglement conjecture will be able to catch up to the research frontier without having to pay $39.95 per article.
One nice feature about these proceedings is that if you later want to publish a longer version of your submission in a journal, then you will not face any barriers from TQC. I also want to explicitly address one concern that some have raised about TQC, which is that the published proceedings will prevent authors from publishing their work elsewhere. For many, the open access proceedings will be a welcome departure from the usual exploitative policies of not only commercial publishers like Elsevier, but also the academic societies like ACM and IEEE. But I know that others will say “I’m happy to sign your petitions, but at the end of the day, I still want to submit my result to PRL” and who am I to argue with this?
So I want to stress that submitting to TQC does not prevent submitting your results elsewhere, e.g. to PRL. If you publish one version in TQC and a substantially different version (i.e. with substantial new material) in PRL, then not only is TQC fine with it, but it is compatible with APS policy which I am quoting here:

Similar exceptions [to the prohibition against double publishing] are generally made for work disclosed earlier in abbreviated or preliminary form in published conference proceedings. In all such cases, however, authors should be certain that the paper submitted to the archival journal does not contain verbatim text, identical figures or tables, or other copyrighted materials which were part of the earlier publications, without providing a copy of written permission from the copyright holder. [ed: TQC doesn’t require copyright transfer, because it’s not run by people who want to exploit you, so you’re all set here] The paper must also contain a substantial body of new material that was not included in the prior disclosure. Earlier relevant published material should, of course, always be clearly referenced in the new submission.

I cannot help but mention that even this document (the “APS Policy on Prior Disclosure”) is behind a paywall and will cost you $25 if your library doesn’t subscribe. But if you really want to support this machine and submit to PRL or anywhere else (and enjoy another round of refereeing), TQC will not get in your way.
Part of what makes this easy is TQC’s civilized copyright policy (i.e. you keep it). By contrast, Thomas and Umesh had a more difficult, though eventually resolved, situation when combining STOC/FOCS with Nature.

Two Cultures in One of the Cultures

[Image: This makes no sense]
A long time ago in a mental universe far far away I gave a talk to a theory seminar about quantum algorithms. An excerpt from the abstract:

Quantum computers can outperform their classical brethren at a variety of algorithmic tasks….[yadda yadda yadda deleted]… This talk will assume no prior knowledge of quantum theory…

The other day I was looking at recent or forthcoming interesting quantum talks and I stumbled upon one by a living pontiff:

In this talk, I’ll describe connections between the unique games conjecture (or more precisely, the closely related problem of small-set expansion) and the quantum separability problem… [amazing stuff deleted]…The talk will not assume any knowledge of quantum mechanics, or for that matter, of the unique games conjecture or the Lasserre hierarchy….

And another for a talk to kick off a program at the Simons institute on Hamiltonian complexity (looks totally fantastic, wish I could be a fly on the wall at that one!):

The title of this talk is the name of a program being hosted this semester at the Simons Institute for the Theory of Computing….[description of field of Hamiltonian complexity deleted…] No prior knowledge of quantum mechanics or quantum computation will be assumed.

Talks are tricky. Tailoring your talk to your audience is probably one of the trickier sub-trickinesses of giving a talk. But remind me again, why are we apologizing to theoretical computer scientists / mathematicians (who are likely the audiences for the three talks I linked to) for their ignorance of quantum theory? Imagine theoretical computer science talks coming along with a disclaimer: “no prior knowledge of the PCP theorem is assumed”, “no prior knowledge of polynomial-time approximation schemes is assumed”, etc. Why is it still considered necessary, decades after Shor’s algorithm and error correction showed that quantum computing is indeed a fascinating and important idea in computer science, to apologize to an audience for a large gap in their basic knowledge of the universe?
As a counterargument, I’d love to hear from a non-quantum-computing person who was swayed to attend a talk because it said that no prior knowledge of quantum theory was assumed. Has that ever worked? (Or similar claims of a cross-cultural prereq swaying you to bravely go where none of your kind has gone before.)

Error correcting aliens

Happy New Year!  To celebrate let’s talk about error correcting codes and….aliens.
The universe, as many have noted, is kind of like a computer.  Or at least our best description of the universe is given by equations that prescribe how various quantities change in time, a lot like a computer program describes how data in a computer changes in time.  Of course, this ignores one of the fundamental differences between our universe and our computers: our computers tend to be able to persist information over long periods of time.  In contrast, the quantities describing our universe tend to have a hard time being recoverable after even a short amount of time: the location (wave function) of an atom, unless carefully controlled, is impacted by an environment that quickly makes it impossible to recover the initial bits (qubits) of the location of the atom. 
Computers, then, are special objects in our universe, ones that persist and allow manipulation of long lived bits of information.  A lot like life.  The bits describing me, the structural stuff of my bones, skin, and muscles, the more concretely information theoretic bits of my grumbly personality and memories, the DNA describing how to build a new version of me, are all pieces of information that persist over what we call a lifetime, despite the constant gnaw of second law armed foes that would transform me into unliving goo.  Maintaining my bits in the face of phase transitions, viruses, bowel obstructions, cardiac arrest, car accidents, and bears is what human life is all about, and we increasingly do it well, with life expectancy now approaching 80 years in many parts of the world.
But 80 years is not that long. Our universe is 13.8ish billion years old, or about 170ish million current lucky humans’ life expectancies. Most of us would, all things equal, like to live even longer. We’re not big fans of death. So what obstacles are there to life extension? Yadda yadda biology squishy stuff, yes. Not qualified to go there, so I won’t. But since life is like a computer with regard to maintaining information, we can look toward our understanding of what allows computers to preserve information…and extrapolate!
Enter error correction. If bits are subject to processes that flip their values, then you’ll lose information. If, however, we give up storing information in each individual bit and instead store single bits across multiple individual noisy bits, we can make this spread-out bit live much longer. Instead of saying 0, and watching it decay to an unknown value, say 000…00, that is, 0 many times, and ask whether the majority of these values remain 0. Voilà, you’ve got an error correcting code. Your smeared-out information will be preserved longer, but, and here is the important point, at the cost of using more bits.
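To make the majority-vote picture concrete, here is a minimal simulation sketch in Python; the independent bit-flip noise model, the numbers, and the function names are my own illustrative choices, not anything from the original study:

```python
import random

def noisy_copy(bit, p):
    """Return the bit, flipped with probability p (independent bit-flip noise)."""
    return bit ^ (random.random() < p)

def store_with_repetition(bit, n, p):
    """Store `bit` in n noisy copies and read it back by majority vote."""
    copies = [noisy_copy(bit, p) for _ in range(n)]
    return int(sum(copies) > n / 2)

# Each individual copy is wrong 10% of the time, but the majority vote over
# 101 copies almost never is -- at the cost of using 101 bits instead of 1.
random.seed(0)
trials = 10_000
failures = sum(store_with_repetition(0, n=101, p=0.1) for _ in range(trials))
print(f"logical failures: {failures} / {trials}")
```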
Formalizing this a bit, there is a class of beautiful theorems, due originally to von Neumann classically and to a host of others quantumly, called the threshold theorems for fault-tolerant computing. They tell you, given a model for how errors occur, how fast they occur, and how fast you can compute, whether you can reliably compute. Roughly, these theorems all say something like: if your error rate is below some threshold, then you can compute with whatever smaller failure rate you wish, using a more complicated construction whose size grows only polynomially in the logarithm of the inverse of that failure rate. What I’d like to pull out of these theorems for talking about life are two things: 1) reliable computation carries an overhead, both in size and in time, and 2) the scaling of this overhead depends crucially on the error model assumed.
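To see what “polynomially in the logarithm of the inverse failure rate” looks like in practice, here is a rough back-of-the-envelope sketch using the textbook concatenated-code model, in which each level of concatenation squares the ratio of the error rate to the threshold; the particular numbers (threshold, block size) are my own illustrative choices, not taken from any specific theorem:

```python
def levels_needed(p, p_th, target):
    """Concatenation levels k such that p_th * (p / p_th) ** (2 ** k) <= target.

    Standard toy model for a code correcting one error per block:
    each level of concatenation squares the ratio p / p_th.
    """
    if p >= p_th:
        raise ValueError("physical error rate must be below the threshold")
    k = 0
    while p_th * (p / p_th) ** (2 ** k) > target:
        k += 1
    return k

# With block size d per level, the cost is d ** k physical bits per logical
# bit, which grows only polynomially in log(1/target).
p, p_th, d = 1e-3, 1e-2, 7
for target in (1e-6, 1e-9, 1e-12, 1e-15):
    k = levels_needed(p, p_th, target)
    print(f"target {target:.0e}: {k} levels, ~{d ** k} physical bits per logical bit")
```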
Which leads back to life. If it is a crucial part of life to preserve information, to keep your bits moving down the timeline, then it seems that the threshold theorems will have something, ultimately, to say about extending your lifespan. What are the error models and what are the fundamental error rates of current human life? Is there a threshold theorem for life? I’m not sure we understand biology well enough to pin this down yet, but I do believe we can use the above discussion to extrapolate about our future evolution.
Or, because witnessing the evolution of humans out of their present state seemingly requires waiting a really long time, or technology we currently don’t have, let’s apply this to…aliens. 13.8 billion years is a long time. It now looks like there are lots of planets. If intelligent life arose on those planets billions of years ago, then it is likely that it has also had billions of years to evolve past the level of intelligence that infects our current human era. Which is to say, it seems like any such hypothetical aliens would have had time to push the boundaries of the threshold theorem for life. They would have manipulated and technologically engineered themselves into beings that live for a long period of time. They would have applied the constructions of the threshold theorem for life to themselves, lengthening their lives by applying the principles of fault-tolerant computing.
As we’ve witnessed over the last century, intelligent life seems to hit a point in its development where rapid technological change occurs. Supposing that the time life spends going from reproducing, unthinking stuff to megalords of technological magic, able to modify themselves and apply the constructions of the threshold theorem for life, is short, then it seems that most life will be found at the two ends of the spectrum: unthinking goo, or creatures who have climbed the threshold theorem for life to extend their lifespans to extremely long lifetimes. Which lets us think about what intelligent alien life would look like: it will be pushing the boundaries of the threshold theorem for life.
Which lets us make predictions about what this advanced alien life would look like. First, and probably most importantly, it would be slow. We know that our own biology operates at an error rate that ends up giving us about 80 years. If we want to extend this further, then, taking our guidance from the threshold theorems of computation, we will have to use larger and slower constructions in order to extend this lifetime. And, another important point, we have to do this for each new error model that comes to dominate our death rate. That is, today cardiac arrest kills the highest percentage of people. This is one error model, so to speak. Once you’ve conquered it, you can go down the line, thinking about error models like earthquakes, falls off cliffs, etc. So, likely, if you want to live a long time, you won’t just be slightly slower than our current biological life, but extremely slow. And large.
And now you can see my resolution to the Fermi paradox. The Fermi paradox is a fancy way of saying “where are the (intelligent) aliens?” Perhaps we have not found intelligent life because the natural fixed point of intelligent evolution is to produce entities for which our 80-year lifespans are not even a fraction of one of their basic clock cycles. Perhaps we don’t see aliens because, unless you catch life in the short transition from unthinking goo to masters of the universe, the aliens are just operating on too slow a timescale. To discover aliens, we must correlate observations over a long timespan, something we have not yet had the tools and time to do. Even more interesting, the threshold theorems also have you spread your information out across a large number of individually erring sub-systems. So not only do you have to look at longer time scales, you also need to make correlated observations over larger and larger systems. Individual bits in error correcting codes look as noisy as before; it is only in the aggregate that information is preserved over longer timespans. So not only do we have to look slower, we need to do so over larger chunks of space. We don’t see aliens, dear Fermi, because we are young and impatient.
And about those error models. Our medical technology is valiantly tackling a long list of human concerns. But those advanced aliens: what kind of error models are they most concerned with? Might I suggest that among those error models, on the long list of things that might not have been fixed by their current setup, the things that end up limiting their error rate, we ourselves might appear? So quick, up the ladder of threshold theorems for life, before we end up an error model in some more advanced creature’s slow, intelligent mind!

Are articles in high-impact journals more like designer handbags, or monarch butterflies?

Monarch butterfly handbag (src)

US biologist Randy Schekman, who shared this year’s physiology and medicine Nobel prize, has made prompt use of his new bully pulpit. In
How journals like Nature, Cell and Science are damaging science: The incentives offered by top journals distort science, just as big bonuses distort banking
he singled out these “luxury” journals as a particularly harmful part of the current milieu in which “the biggest rewards follow the flashiest work, not the best,” and he vowed no longer to publish in them. An accompanying Guardian article includes defensive quotes from representatives of Science and Nature, especially in response to Schekman’s assertions that the journals favor controversial articles over boring but scientifically more important ones like replication studies, and that they deliberately seek to boost their impact factors by restricting the number of articles published, “like fashion designers who create limited-edition handbags or suits.”  Focusing on journals, his main concrete suggestion is to increase the role of open-access online journals like his eLife, supported by philanthropic foundations rather than subscriptions. But Schekman acknowledges that blame extends to funding organizations and universities, which use publication in high-impact-factor journals as a flawed proxy for quality, and to scientists who succumb to the perverse incentives to put career advancement ahead of good science.  Similar points were made last year in Serge Haroche’s thoughtful piece on why it’s harder to do good science now than in his youth.   This, and Nature’s recent story on Brazilian journals’ manipulation of impact factor statistics, illustrate how prestige journals are part of the solution as well as the problem.
Weary of people and institutions competing for the moral high ground in a complex terrain, I sought a less value-laden approach,  in which scientists, universities, and journals would be viewed merely as interacting IGUSes (information gathering and utilizing systems), operating with incomplete information about one another. In such an environment, reliance on proxies is inevitable, and the evolution of false advertising is a phenomenon to be studied rather than disparaged.  A review article on biological mimicry introduced me to some of the refreshingly blunt standard terminology of that field.  Mimicry,  it said,  involves three roles:  a model,  i.e.,  a living or material agent emitting perceptible signals, a mimic that plagiarizes the model, and a dupe whose senses are receptive to the model’s signal and which is thus deceived by the mimic’s similar signals.  As in human affairs, it is not uncommon for a single player to perform several of these roles simultaneously.

QIP 2014 accepted talks

This Thanksgiving, even if we can’t all be fortunate enough to be presenting a talk at QIP, we can be thankful for being part of a vibrant research community with so many different lines of work going on. The QIP 2014 accepted talks are now posted with 36 out of 222 accepted. While many of the hot topics of yesteryear (hidden subgroup problem, capacity of the depolarizing channel) have fallen by the wayside, there is still good work happening in the old core topics (algorithms, information theory, complexity, coding, Bell inequalities) and in topics that have moved into the mainstream (untrusted devices, topological order, Hamiltonian complexity).

Here is a list of talks, loosely categorized by topic (inspired by Thomas’s list from last year). I’m pretty sad about missing my first QIP since I joined the field, because its unusually late timing overlaps the first week of the semester at MIT. But in advance of the talks, I’ll write a few words (in italics) about what I would be excited about hearing if I were there.

Quantum Shannon Theory

There are a lot of new entropies! Some of these may be useful – at first for tackling strong converses, but eventually maybe for other applications as well. Others may, well, just contribute to the entropy of the universe. The bounds on entanglement rate of Hamiltonians are exciting, and looking at them, I wonder why it took so long for us to find them.

1a. A new quantum generalization of the Rényi divergence with applications to the strong converse in quantum channel coding
Frédéric Dupuis, Serge Fehr, Martin Müller-Lennert, Oleg Szehr, Marco Tomamichel, Mark Wilde, Andreas Winter and Dong Yang. 1306.3142 1306.1586
merged with
1b. Quantum hypothesis testing and the operational interpretation of the quantum Renyi divergences
Milan Mosonyi and Tomohiro Ogawa. 1309.3228

25. Zero-error source-channel coding with entanglement
Jop Briet, Harry Buhrman, Monique Laurent, Teresa Piovesan and Giannicola Scarpa. 1308.4283

28. Bound entangled states with secret key and their classical counterpart
Maris Ozols, Graeme Smith and John A. Smolin. 1305.0848
It’s funny how bound key is a topic for quant-ph, even though it is something that is in principle purely a classical question. I think this probably is because of Charlie’s influence. (Note that this particular paper is mostly quantum.)

3a. Entanglement rates and area laws
Michaël Mariën, Karel Van Acoleyen and Frank Verstraete. 1304.5931 (This one could also be put in the condensed-matter category.)
merged with
3b. Quantum skew divergence
Koenraad Audenaert. 1304.5935

22. Quantum subdivision capacities and continuous-time quantum coding
Alexander Müller-Hermes, David Reeb and Michael Wolf. 1310.2856

Quantum Algorithms

This first paper is something I tried (unsuccessfully, needless to say) to disprove for a long time. I still think that this paper contains yet-undigested clues about the difficulties of non-FT simulations.

2a. Exponential improvement in precision for Hamiltonian-evolution simulation
Dominic Berry, Richard Cleve and Rolando Somma. 1308.5424
merged with
2b. Quantum simulation of sparse Hamiltonians and continuous queries with optimal error dependence
Andrew Childs and Robin Kothari.
update: The papers appear now to be merged. The joint (five-author) paper is 1312.1414.

35. Nested quantum walk
Andrew Childs, Stacey Jeffery, Robin Kothari and Frederic Magniez.
(not sure about arxiv # – maybe this is a generalization of 1302.7316?)

Quantum games: from Bell inequalities to Tsirelson inequalities

It is interesting how the first generation of quantum information results is about showing the power of entanglement, and now we are all trying to limit the power of entanglement. These papers are, in a sense, about toy problems. But I think the math of Tsirelson-type inequalities is going to be important in the future. For example, the monogamy bounds that I’ve recently become obsessed with can be seen as upper bounds on the entangled value of symmetrized games.
4a. Binary constraint system games and locally commutative reductions
Zhengfeng Ji. 1310.3794
merged with
4b. Characterization of binary constraint system games
Richard Cleve and Rajat Mittal. 1209.2729

20. A parallel repetition theorem for entangled projection games
Irit Dinur, David Steurer and Thomas Vidick. 1310.4113

33. Parallel repetition of entangled games with exponential decay via the superposed information cost
André Chailloux and Giannicola Scarpa. 1310.7787

Untrusted randomness generation

Somehow self-testing has exploded! There is a lot of information theory here, but the convex geometry of conditional probability distributions also is relevant, and it will be interesting to see more connections here in the future.

5a. Self-testing quantum dice certified by an uncertainty principle
Carl Miller and Yaoyun Shi.
merged with
5b. Robust device-independent randomness amplification from any min-entropy source
Kai-Min Chung, Yaoyun Shi and Xiaodi Wu.

19. Infinite randomness expansion and amplification with a constant number of devices
Matthew Coudron and Henry Yuen. 1310.6755

29. Robust device-independent randomness amplification with few devices
Fernando Brandao, Ravishankar Ramanathan, Andrzej Grudka, Karol Horodecki, Michal Horodecki and Pawel Horodecki. 1310.4544

The fuzzy area between quantum complexity theory, quantum algorithms and classical simulation of quantum systems. (But not Hamiltonian complexity.)

I had a bit of trouble categorizing these, and also in deciding how surprising I should find each of the results. I am also somewhat embarrassed about still not really knowing exactly what a quantum double is.

6. Quantum interactive proofs and the complexity of entanglement detection
Kevin Milner, Gus Gutoski, Patrick Hayden and Mark Wilde. 1308.5788

7. Quantum Fourier transforms and the complexity of link invariants for quantum doubles of finite groups
Hari Krovi and Alexander Russell. 1210.1550

16. Purifications of multipartite states: limitations and constructive methods
Gemma De Las Cuevas, Norbert Schuch, David Pérez-García and J. Ignacio Cirac.

Hamiltonian complexity started as a branch of quantum complexity theory but by now has mostly devoured its host

A lot of exciting results. The poly-time algorithm for 1D Hamiltonians appears not quite ready for practice yet, but I think it is close. The Cubitt-Montanaro classification theorem brings new focus to transverse-field Ising, and to the weird world of stoquastic Hamiltonians (along which lines I think the strengths of stoquastic adiabatic evolution deserve more attention). The other papers each do more or less what we expect, but introduce a lot of technical tools that will likely see more use in the coming years.

13. A polynomial-time algorithm for the ground state of 1D gapped local Hamiltonians
Zeph Landau, Umesh Vazirani and Thomas Vidick. 1307.5143

15. Classification of the complexity of local Hamiltonian problems
Toby Cubitt and Ashley Montanaro. 1311.3161

30. Undecidability of the spectral gap
Toby Cubitt, David Pérez-García and Michael Wolf.

23. The Bose-Hubbard model is QMA-complete
Andrew M. Childs, David Gosset and Zak Webb. 1311.3297

24. Quantum 3-SAT is QMA1-complete
David Gosset and Daniel Nagaj. 1302.0290

26. Quantum locally testable codes
Dorit Aharonov and Lior Eldar. (also QECC) 1310.5664

Codes with spatially local generators aka topological order aka quantum Kitaev theory

If a theorist is going to make some awesome contribution to building a quantum computer, it will probably be via this category. Yoshida’s paper is very exciting, although I think the big breakthroughs here were in Haah’s still underappreciated masterpiece. Kim’s work gives operational meaning to the topological entanglement entropy, a quantity I had always viewed with perhaps undeserved skepticism. It too was partially anticipated by an earlier paper, by Osborne.

8. Classical and quantum fractal code
Beni Yoshida. I think this title is a QIP-friendly rebranding of 1302.6248

21. Long-range entanglement is necessary for a topological storage of information
Isaac Kim. 1304.3925

Bit commitment is still impossible (sorry Horace Yuen) but information-theoretic two-party crypto is alive and well

The math in this area is getting nicer, and the protocols more realistic. The most unrealistic thing about two-party crypto is probably the idea that it would ever be used, when people either don’t care about security or don’t even trust NIST not to be a tool of the NSA.

10. Entanglement sampling and applications
Frédéric Dupuis, Omar Fawzi and Stephanie Wehner. 1305.1316

36. Single-shot security for one-time memories in the isolated qubits model
Yi-Kai Liu. 1304.5007

Communication complexity

It is interesting how quantum and classical techniques are not so far apart for many of these problems, in part because classical TCS is approaching so many problems using norms, SDPs, Banach spaces, random matrices, etc.

12. Efficient quantum protocols for XOR functions
Shengyu Zhang. 1307.6738

9. Noisy Interactive quantum communication
Gilles Brassard, Ashwin Nayak, Alain Tapp, Dave Touchette and Falk Unger. 1309.2643 (also info th / coding)

Foundations

I hate to be a Philistine, but I wonder what disaster would befall us if there WERE a psi-epistemic model that worked. Apart from being able to prove false statements. Maybe a commenter can help?

14. No psi-epistemic model can explain the indistinguishability of quantum states
Eric Cavalcanti, Jonathan Barrett, Raymond Lal and Owen Maroney. 1310.8302

??? but it’s from THE RAT

update: A source on the PC says that this is an intriguing mixture of foundations and Bell inequalities, again in the “Tsirelson regime” of exploring the boundary between quantum and non-signaling.
17. Almost quantum
Miguel Navascues, Yelena Guryanova, Matty Hoban and Antonio Acín.

FTQC/QECC/papers Kalai should read 🙂

I love the part where Daniel promises not to cheat. Even though I am not actively researching in this area, I think the race between surface codes and concatenated codes is pretty exciting.

18. What is the overhead required for fault-tolerant quantum computation?
Daniel Gottesman. 1310.2984

27. Universal fault-tolerant quantum computation with only transversal gates and error correction
Adam Paetznick and Ben Reichardt. 1304.3709

Once we equilibrate, we will still spend a fraction of our time discussing thermodynamics and quantum Markov chains

I love just how diverse and deep this category is. There are many specific questions that would be great to know about, and the fact that big general questions are still being solved is a sign of how far we still have to go. I enjoyed seeing the Brown-Fawzi paper solve problems that stumped me in my earlier work on the subject, and I was also impressed by the Cubitt et al paper being able to make a new and basic statement about classical Markov chains. The other two made me happy through their “the more entropies the better” approach to the world.

31. An improved Landauer Principle with finite-size corrections and applications to statistical physics
David Reeb and Michael M. Wolf. 1306.4352

32. The second laws of quantum thermodynamics
Fernando Brandao, Michal Horodecki, Jonathan Oppenheim, Nelly Ng and Stephanie Wehner. 1305.5278

34. Decoupling with random quantum circuits
Winton Brown and Omar Fawzi. 1307.0632

11. Stability of local quantum dissipative systems
Toby Cubitt, Angelo Lucia, Spyridon Michalakis and David Pérez-García. 1303.4744

Portrait of an Academic at a Midlife Crisis

Citations are the currency of academia. But the currency of your heart is another thing altogether. With apologies to my co-authors, here is a plot of my paper citations versus my own subjective rating of the paper. Hover over the circles to see the paper title, citations per year, and score. Click to see the actual paper. (I’ve only included papers that appear on the arxiv.)
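(As an aside, an interactive scatter plot of this kind takes only a few lines to build. Here is a hypothetical sketch using plotly express; the file papers.csv and its column names are assumptions for illustration, not the data or code behind the actual plot, and the click-to-open-the-paper behavior would need a bit of extra JavaScript that is omitted here.)

```python
import pandas as pd
import plotly.express as px

# Assumed input: papers.csv with columns title, arxiv_url, cites_per_year, score.
papers = pd.read_csv("papers.csv")

fig = px.scatter(
    papers,
    x="cites_per_year",
    y="score",
    hover_name="title",          # paper title shown when hovering over a circle
    hover_data=["arxiv_url"],    # arxiv link displayed in the hover box
)
fig.show()
```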

If I were an economist, I suppose at this point I would fit a sloping line through the data and claim victory. But being a lowly software developer, it’s more interesting to me to give a more anecdotal treatment of the data.

  • The paper that I love the most has, as of today, exactly zero citations! Why do I love that paper? Not because it’s practical (far from it). Not because it proves things to an absolute T (just ask the beloved referees of that paper). But I love it because it says there is the possibility that there exists a material that quantum computes in a most peculiar manner. In particular the paper argues that it is possible to have devices where: quantum information starts on one side of the device, you turn on a field over the device, and “bam!” the quantum information is now on the other side of the material with a quantum circuit applied to it. How f’in cool is that! I think it’s just wonderful, and had I stuck around the hallowed halls, I probably would still be yelling about how cool it is, much to the dismay of my friends and colleagues (especially those for whom the use of the word adiabatic causes their brain to go spazo!)
  • Three papers I was lucky enough to be involved in as a graduate student, wherein we showed how exchange interactions alone could quantum compute, have generated lots of citations. But the citations correlate poorly with my score. Why? Because it’s my score! Haha! In particular the paper I love the most out of this series is not the best written, most deep, practical, or highly cited. It’s the paper where we first showed that exchange alone was universal for quantum computing. The construction in the paper has large warts on it, but it was the paper where I think I first participated in a process where I felt like we knew something about the possibilities of building a quantum computer that others had not quite exactly thought of. And that feeling is wonderful and is why that paper has such a high subjective score.
  • It’s hard not to look at this decade’s worth of theory papers and be dismayed about how far they are from real implementation. I think that is why I like Coherence-preserving quantum bit and Adiabatic Quantum Teleportation. Both of these are super simple and always felt like, if I could just get an experimentalist drunk enough excited enough, they might try to implement that damn thing. The first shows a way to make a qubit that should be more robust to errors because its ground state is in an error detecting state. The second shows a way to get quantum information to move between three qubits using a simple adiabatic procedure related to quantum teleportation. I still hope someday to see these executed on a functioning quantum computer, and I wonder how I’d feel about them should that happen.