Should Papers Have Unit Tests?

Perhaps the greatest shock I’ve had in moving from the hallowed halls of academia to the workman depths of everyday software development is the amount of testing that is done when writing code. Likely I’ve written more test code than non-test code over the last three-plus years at Google. The most common type of test I write is a “unit test”, in which a small portion of code is tested for correctness (hey Class, do you do what you say?). The second most common type is an “integration test”, which checks that the units are functioning properly when working together (hey Server, do you really do what you say?). Testing has many benefits: correctness of code, of course, but it is also important for ease of changing code (refactoring), for supporting decoupled and simplified design (untestable code is often a sign that your units are too complicated or too tightly coupled), and more.
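For readers who have never seen one in the wild, here is a minimal sketch of what a unit test looks like. The tiny Stack class and its tests are entirely hypothetical, invented just for this illustration, but they capture the “hey Class, do you do what you say?” flavor.

    import unittest


    class Stack:
        """A toy class invented purely for this illustration."""

        def __init__(self):
            self._items = []

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

        def is_empty(self):
            return not self._items


    class StackTest(unittest.TestCase):
        """Unit tests: ask the class whether it does what it says."""

        def test_new_stack_is_empty(self):
            self.assertTrue(Stack().is_empty())

        def test_pop_returns_most_recently_pushed_item(self):
            stack = Stack()
            stack.push("a")
            stack.push("b")
            self.assertEqual(stack.pop(), "b")


    if __name__ == "__main__":
        unittest.main()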
Over the holiday break, I’ve been working on a paper (old habit, I know) with lots of details that I’d like to make sure I get correct. Throughout the entire paper writing process, one spends a lot of time checking and rechecking the correctness of the arguments. And so the thought came to my mind while writing this paper, “boy it sure would be easier to write this paper if I could write tests to verify my arguments.”
In a larger sense, all papers are a series of tests: small arguments meant to convince the reader of the veracity, or at least the likelihood, of the paper’s claims. Testing in a programming environment, though, has one vital distinction: the tests are automated, with the added benefit that you can run them often as you change code and gain confidence that the contracts enforced by the tests have not been broken. But perhaps there would be a benefit to writing a separate section of “unit tests” for different portions of a paper’s main argument. Such unit test sections could be small, self-contained, and serve as supplemental reading to help a reader gain confidence in the claims of the main text.
I think some of the benefits of having a section of “unit tests” in a paper would be:

  • Documenting limit tests A common trick of the trade in physics papers is to take a parameter to a limiting value and see how the equations behave. Often one can recover known results in such limits, or show that certain relations continue to hold under such a scaling. These kinds of arguments give you confidence in a result, but are often left out of papers. This is akin to the edge-case testing done by programmers.
  • Small examples When a paper gets abstract, one often spends a lot of time trying to ground oneself by working with small examples (unless you are Grothendieck, of course.) One often writes a paper by interjecting these examples into the main flow, but they would fit more naturally in a unit testing section.
  • Alternative explanation testing When you read an experimental physics paper, you often wonder: am I really supposed to believe the effect they are describing? Large portions of the paper are often devoted to settling such questions, but when you listen to experimentalists grill each other you find that these arguments go even deeper. “Did you consider that your laser is actually exciting X, and all you’re seeing is Y?” The amount of this that goes on is huge and, sadly, not documented for the greater community.
  • Combinatorial or property checks Often one finds oneself checking that a result works by counting instances to confirm that they sum to a known total, or by verifying that a property holds before and after a transformation (an invariant). While these are useful for providing evidence that an argument is correct, they can feel a bit out of place in a main argument; a toy sketch of what such checks might look like follows this list.
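To make the idea concrete, here is a toy sketch of what a couple of these “paper unit tests” might look like if one bothered to make them executable. The two claims being checked, a combinatorial identity and a limiting value, are invented stand-ins for whatever small checks a real paper would need; checks that reduce to arithmetic like these are easy to mechanize, while the more interesting arguments, as discussed below, are not.

    import itertools
    import math
    import unittest


    class PaperArgumentTest(unittest.TestCase):
        """Hypothetical 'unit tests' for two stand-in claims in a paper."""

        def test_counting_check(self):
            # Combinatorial check: the subsets of an n-element set, counted
            # size by size, should sum to 2^n.
            n = 6
            counts_by_size = [
                sum(1 for _ in itertools.combinations(range(n), k))
                for k in range(n + 1)
            ]
            self.assertEqual(sum(counts_by_size), 2 ** n)

        def test_limit_check(self):
            # Limit check: (1 + x/n)^n should approach e^x as n grows large.
            x = 1.5
            n = 10 ** 6
            self.assertAlmostEqual((1 + x / n) ** n, math.exp(x), places=4)


    if __name__ == "__main__":
        unittest.main()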

Of course it would be wonderful if there were a way for these little “units” to be automatically executed. But the best path I can think of right now towards getting there starts with the construction of an artificial mind. (Yeah, I think perhaps I’ve been at Google too long.)

Why I Left Academia

TLDR: scroll down for the pretty interactive picture.
Over two years ago I abandoned my post at the University of Washington as an assistant research professor studying quantum computing and started a new career as a software developer for Google. Back when I was a denizen of the ivory tower I used to daydream that when I left academia I would write a long “Jerry Maguire”-esque piece about the sordid state of the academic world, of my lot in that world, and how unfair and f**ked up it all is. But maybe with less Tom Cruise. You know the text, the standard rebellious view of all young rebels stuck in the machine (without any mirror.) The song “Mad World” has a lyric that I always thought summed up what I thought it would feel like to leave and write such a blog post: “The dreams in which I’m dying are the best I’ve ever had.”
But I never wrote that post. Partially this was because every time I thought about it, the content of that post seemed so run-of-the-mill boring that I feared my friends who read it would never come visit me again. The story of why I left really is not that exciting. Partially because writing a post about why “you left” is about as “you”-centric as you can get, and yes, I realize I have a problem with ego-centric ramblings. Partially because I have been busy learning a new career and writing a lot (omg a lot) of code. Partially also because the notion of “why” is one I—as a card-carrying ex-physicist—cherish, and I knew that I could not possibly do justice to giving a decent “why” explanation.
Indeed: what would a “why” explanation for a life decision such as the one I faced look like? For many years when I would think about this I would simply think “well it’s complicated and how can I ever?” There are, of course, the many different components that you think about when considering such decisions. But then what do you do with them? Does it make sense to think about them as probabilities? “I chose to go to Caltech, 50 percent because I liked physics, and 50 percent because it produced a lot of Nobel Prize winners.” That does not seem very satisfying.
Maybe the way to do it is to phrase the decisions in terms of probabilities that I was assigning before making the decision. “The probability that I’ll be able to contribute something to physics will be 20 percent if I go to Caltech versus 10 percent if I go to MIT.” But despite what some economists would like to believe, there ain’t no way I ever made most decisions via explicit calculation of my subjective odds. Thinking about decisions in terms of what an actor feels each decision would do to increase his/her chances of success feels better than just blindly assigning probabilities to the components of a decision, but it also seems like a lie, attributing math where something else is at play.
So what would a good description of the model be? After pondering this for a while I realized I was an idiot (for about the eighth time that day. It was a good day.) The best way to describe how my brain was working is, of course, nothing short of my brain itself. So here, for your amusement, is my brain (sorry, only tested using Chrome). Yes, it is interactive.

Science Code Manifesto

Recently, one of the students here at U. Sydney and I had the frustrating experience of trying to reproduce a numerical result from a paper, but it just wasn’t working. The code used by the authors was regrettably not made publicly available, so once we were fairly sure that our code was correct, we didn’t know how to resolve the discrepancy. Luckily, in our small community, I knew the authors personally and we were able to figure out why the results didn’t match up. But as code becomes a larger and larger part of scientific projects, these sorts of problems will increase in frequency and severity.
What can we do about it?
A team of very smart computer scientists has come together and written the Science Code Manifesto. It is short and sweet; the whole thing boils down to five simple principles for publishing code:

Code
All source code written specifically to process data for a published paper must be available to the reviewers and readers of the paper.
Copyright
The copyright ownership and license of any released source code must be clearly stated.
Citation
Researchers who use or adapt science source code in their research must credit the code’s creators in resulting publications.
Credit
Software contributions must be included in systems of scientific assessment, credit, and recognition.
Curation
Source code must remain available, linked to related materials, for the useful lifetime of the publication.

If you support this, and you want to help contribute to the solution, then please go and endorse the manifesto. Even more importantly, practice the five C’s the next time you publish a paper!

Q-circuit v2.0

Many readers are familiar with the LaTeX package called Q-circuit that I coauthored with Bryan Eastin. If you aren’t familiar with it, it is a set of macros that helps make typesetting quantum circuits easy, efficient and (reasonably) intuitive.  The results are quite beautiful, if I do say so myself, as can be seen in the picture to the left.
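For anyone who hasn’t used it, here is roughly the sort of input involved: a two-qubit circuit that applies a Hadamard and then a CNOT. This is a minimal sketch written from memory (the wire labels and spacing parameters are just illustrative choices), so consult the tutorial for the authoritative syntax.

    \documentclass{article}
    \usepackage{qcircuit}  % the Q-circuit macros; requires Xy-pic
    \begin{document}

    % A small two-qubit circuit: a Hadamard on the top wire, then a CNOT
    % controlled by the top wire and targeting the bottom wire.
    \[
    \Qcircuit @C=1em @R=1em {
      \lstick{|0\rangle} & \gate{H} & \ctrl{1} & \qw \\
      \lstick{|0\rangle} & \qw      & \targ    & \qw
    }
    \]

    \end{document}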
In the past year Bryan and I began getting emails from Q-circuit users who were experiencing some bugs. It turns out that the issue was usually an incompatibility between Q-circuit v1.2 and Xy-pic v3.8, an update to a package that Q-circuit relies on heavily.
Thanks to the user feedback and some support from the authors of Xy-pic, we were able to stamp out the bugs. (Probably… no guarantees!) Thus I present to you the latest version of Q-circuit!
Download Q-circuit v2.0
There is more info on the Q-circuit website, where you will find the tutorial, some examples, and you can also enjoy the painfully retro green-on-black motif. (Let the haters hate… I like it.) A few additional technical details:

  1. Nothing has been added to the new version.  It is as near as possible to the old version while still functioning with Xy-pic version 3.8.x.
  2. The old version of Q-circuit works better with Xy-pic version 3.7. (When using Xy-pic 3.7, Q-circuit 2.0 makes PDFs with slightly pixelated curves.)
  3. The arXiv is still using Xy-pic 3.7 and they don’t know when they’ll update to 3.8.

Finally, a big thank you to my coauthor Bryan for putting in so much hard work to make Q-circuit a success!