Should Papers Have Unit Tests?

Perhaps the greatest shock I’ve had in moving from the hallowed halls of academia to the workmanlike depths of everyday software development is the amount of testing that is done when writing code. Likely I’ve written more test code than non-test code over the last three plus years at Google. The most common type of test I write is a “unit test”, in which a small portion of code is tested for correctness (hey Class, do you do what you say?). The second most common type is an “integration test”, which attempts to test that the units working together are functioning properly (hey Server, do you really do what you say?). Testing has many benefits: correctness of code, of course, but it is also important for ease of changing code (refactoring), supporting decoupled and simplified design (untestable code is often a sign that your units are too complicated, or that your units are too tightly coupled), and more.
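To make the distinction concrete, here’s a minimal sketch of a unit test using Python’s built-in unittest framework (a hypothetical toy, not actual Google code):

```python
import unittest

def word_count(text):
    """The small 'unit' under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Unit tests ask: hey function, do you do what you say?
    def test_counts_words(self):
        self.assertEqual(word_count("do you do what you say"), 6)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

# Run the suite programmatically; in a real project a test runner does this.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

An integration test would instead stand up several such units together, say a whole server, and check the end-to-end behavior.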
Over the holiday break, I’ve been working on a paper (old habit, I know) with lots of details that I’d like to make sure I get correct. Throughout the entire paper writing process, one spends a lot of time checking and rechecking the correctness of the arguments. And so the thought came to my mind while writing this paper, “boy it sure would be easier to write this paper if I could write tests to verify my arguments.”
In a larger sense, all papers are a series of tests: small arguments convincing the reader of the veracity or likelihood of the given claim. Testing in a programming environment has the vital distinction that the tests are automated, with the added benefit that you can run them often as you change code and gain confidence that the contracts enforced by the tests have not been broken. But perhaps there would be a benefit to writing a separate section with “unit tests” for different portions of a main argument in a paper. Such unit test sections could be small, self-contained, and serve as supplemental reading to help a reader gain confidence in the claims of the main text.
I think some of the benefits of having a section of “unit tests” in a paper would be:

  • Documenting limit tests A common trick of the trade in physics papers is to take a parameter to a limiting value to see how the equations behave. Often one can recover known results in such limits, or show that certain relations hold under the relevant scalings. These types of arguments give you confidence in a result, but are often left out of papers. This is kin to edge-case testing by programmers.
  • Small examples When a paper gets abstract, one often spends a lot of time trying to ground oneself by working with small examples (unless you are Grothendieck, of course). Often one writes a paper by interjecting these examples into the main flow of the paper, but they would fit more naturally in a unit testing section.
  • Alternative explanation testing When you read an experimental physics paper, you often wonder: am I really supposed to believe the effect they are talking about? Often large portions of the paper are devoted to trying to settle such arguments, but when you listen to experimentalists grill each other you find that there is an even further depth to them. “Did you consider that your laser is actually exciting X, and all you’re seeing is Y?” The amount of this that goes on is huge, and sadly, it is not documented for the greater community.
  • Combinatorial or property checks Often one finds oneself checking that a result works by doing something like counting instances to check that they sum to a total, or that a property holds before and after a transformation (an invariant). While these are useful for providing evidence that an argument is correct, they can often feel a bit out of place in a main argument.
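To make the flavor of these paper “unit tests” concrete, here is a toy sketch in Python: a limit test checking that relativistic kinetic energy recovers the Newtonian ½mv² as v → 0, and a combinatorial check that subset counts sum to a known total (my own illustrative examples, not drawn from any particular paper):

```python
import math

C = 1.0  # speed of light, in natural units

def kinetic_energy(m, v):
    """Relativistic kinetic energy: (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

# Limit test: for v << c the result should approach the Newtonian (1/2) m v^2.
m, v = 2.0, 1e-4
assert abs(kinetic_energy(m, v) - 0.5 * m * v ** 2) < 1e-12

# Combinatorial check: the subsets of an n-element set, counted by size,
# must sum to 2^n.
n = 10
assert sum(math.comb(n, k) for k in range(n + 1)) == 2 ** n
print("paper unit tests pass")
```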

Of course it would be wonderful if there were a way that these little “units” could be automatically executed. But the best path I can think of right now toward getting there starts with the construction of an artificial mind. (Yeah, I think perhaps I’ve been at Google too long.)

Two Cultures in One of the Cultures

A long time ago in a mental universe far far away I gave a talk to a theory seminar about quantum algorithms. An excerpt from the abstract:

Quantum computers can outperform their classical brethren at a variety of algorithmic tasks….[yadda yadda yadda deleted]… This talk will assume no prior knowledge of quantum theory…

The other day I was looking at recent or forthcoming interesting quantum talks and I stumbled upon one by a living pontiff:

In this talk, I’ll describe connections between the unique games conjecture (or more precisely, the closely related problem of small-set expansion) and the quantum separability problem… [amazing stuff deleted]…The talk will not assume any knowledge of quantum mechanics, or for that matter, of the unique games conjecture or the Lasserre hierarchy….

And another for a talk to kick off a program at the Simons institute on Hamiltonian complexity (looks totally fantastic, wish I could be a fly on the wall at that one!):

The title of this talk is the name of a program being hosted this semester at the Simons Institute for the Theory of Computing….[description of field of Hamiltonian complexity deleted…] No prior knowledge of quantum mechanics or quantum computation will be assumed.

Talks are tricky. Tailoring your talk to your audience is probably one of the trickier sub-trickinesses of giving a talk. But remind me again, why are we apologizing to theoretical computer scientists / mathematicians (which are likely the audiences for the three talks I linked to) for their ignorance of quantum theory? Imagine theoretical computer science talks coming along with a disclaimer, “no prior knowledge of the PCP theorem is assumed”, “no prior knowledge of polynomial-time approximation schemes is assumed”, etc. Why is it still considered necessary, decades after Shor’s algorithm and error correction showed that quantum computing is indeed a fascinating and important idea in computer science, to apologize to an audience for a large gap in their basic knowledge of the universe?
As a counter argument, I’d love to hear from a non-quantum computing person who was swayed to attend a talk because it said that no prior knowledge of quantum theory is assumed. Has that ever worked? (Or similar claims of a cross cultural prereq swaying you to bravely go where none of your kind has gone before.)

Error correcting aliens

Happy New Year!  To celebrate let’s talk about error correcting codes and….aliens.
The universe, as many have noted, is kind of like a computer.  Or at least our best description of the universe is given by equations that prescribe how various quantities change in time, a lot like a computer program describes how data in a computer changes in time.  Of course, this ignores one of the fundamental differences between our universe and our computers: our computers tend to be able to persist information over long periods of time.  In contrast, the quantities describing our universe tend to have a hard time being recoverable after even a short amount of time: the location (wave function) of an atom, unless carefully controlled, is impacted by an environment that quickly makes it impossible to recover the initial bits (qubits) of the location of the atom. 
Computers, then, are special objects in our universe, ones that persist and allow manipulation of long lived bits of information.  A lot like life.  The bits describing me, the structural stuff of my bones, skin, and muscles, the more concretely information theoretic bits of my grumbly personality and memories, the DNA describing how to build a new version of me, are all pieces of information that persist over what we call a lifetime, despite the constant gnaw of second law armed foes that would transform me into unliving goo.  Maintaining my bits in the face of phase transitions, viruses, bowel obstructions, cardiac arrest, car accidents, and bears is what human life is all about, and we increasingly do it well, with life expectancy now approaching 80 years in many parts of the world.
But 80 years is not that long.  Our universe is 13.8ish billion years old, or about 170ish million current lucky humans’ life expectancies.  Most of us would, all things equal, like to live even longer.  We’re not big fans of death.  So what obstacles are there toward life extension?  Yadda yadda biology squishy stuff, yes.  Not qualified to go there so I won’t.  But since life is like a computer in regards to maintaining information, we can look toward our understanding of what allows computers to preserve information…and extrapolate!
Enter error correction.  If bits are subject to processes that flip their values, then you’ll lose information.  If, however, we give up storing information in each individual bit and instead store single bits across multiple individual noisy bits, we can make this spread-out bit live much longer.  Instead of saying 0, and watching it decay to an unknown value, say 000…00, 0 many times, and ask whether the majority of these values remain 0.  Voilà, you’ve got an error correcting code.  Your smeared-out information will be preserved longer, but, and here is the important point, at the cost of using more bits.
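A quick simulation sketch of that majority-vote repetition code (my own toy illustration):

```python
import random

def logical_error_rate(p, n_copies, trials=100_000):
    """Estimate how often a majority vote over n noisy copies loses a stored 0."""
    failures = 0
    for _ in range(trials):
        # Each copy of the bit flips independently with probability p.
        flips = sum(random.random() < p for _ in range(n_copies))
        if flips > n_copies // 2:  # the majority now reads 1: decoding fails
            failures += 1
    return failures / trials

random.seed(42)
for n in (1, 3, 5, 7):
    # More copies -> lower logical error rate, at the cost of more bits.
    print(n, logical_error_rate(p=0.1, n_copies=n))
```

For a physical flip probability of 0.1, three copies already push the logical error rate down to roughly 3p² ≈ 0.03, and each additional pair of copies suppresses it further.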
Formalizing this a bit, there is a class of beautiful theorems, due originally to von Neumann, classically, and a host of others, quantumly, called the threshold theorems for fault-tolerant computing, which tell you, given a model for how errors occur, how fast they occur, and how fast you can compute, whether you can reliably compute. Roughly these theorems all say something like: if your error rate is below some threshold, then you can compute with any smaller failure rate you like, using a larger construction whose size grows only polynomially in the logarithm of the inverse of that target failure rate. What I’d like to pull out of these theorems for talking about life are two things: 1) there is an overhead to reliably compute, and this overhead is both larger in size and takes longer in time, and 2) the scaling of this overhead depends crucially on the error model assumed.
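As a hedged back-of-envelope (a rough scaling sketch, not a statement of any particular threshold theorem): if the logical error rate of a distance-d code falls like (p/p_th)^((d+1)/2) below threshold, then the distance needed to hit a target failure rate grows only logarithmically in the inverse of that target:

```python
def distance_needed(p, p_th, target):
    """Smallest odd distance d with (p/p_th)**((d+1)/2) <= target.

    Assumes p < p_th (below threshold), so each distance increase of 2
    suppresses the logical error rate by another factor of p/p_th.
    """
    ratio = p / p_th
    d = 1
    while ratio ** ((d + 1) / 2) > target:
        d += 2
    return d

# Each extra factor of 10 in reliability costs only a constant additive
# bump in distance: the overhead is polylogarithmic in 1/target.
for target in (1e-3, 1e-6, 1e-9, 1e-12):
    print(target, distance_needed(p=1e-3, p_th=1e-2, target=target))
```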
Which leads back to life. If it is a crucial part of life to preserve information, to keep your bits moving down the timeline, then it seems that the threshold theorems will have something, ultimately, to say about extending your lifespan. What are the error models and what are the fundamental error rates of current human life? Is there a threshold theorem for life? I’m not sure we understand biology well enough to pin this down yet, but I do believe we can use the above discussion to extrapolate about our future evolution.
Or, because witnessing evolution of humans out of their present state seemingly requires waiting a really long time, or technology we currently don’t have, let’s apply this to…aliens. 13.8 billion years is a long time. It now looks like there are lots of planets. If intelligent life arose on those planets billions of years ago, then it is likely that it has also had billions of years to evolve past the level of intelligence that infects our current human era. Which is to say, it seems like any such hypothetical aliens would have had time to push the boundaries of the threshold theorem for life. They would have manipulated and technologically engineered themselves into beings that live for a long period of time. They would have applied the constructions of the threshold theorem for life to themselves, lengthening their lives by applying the principles of fault-tolerant computing.
As we’ve witnessed over the last century, intelligent life seems to hit a point where rapid technological change occurs. Supposing that the time life spends going from unintelligent reproducing stuff to megalords of technological magic, able to modify themselves and apply the constructions of the threshold theorem for life, is short, then it seems that most life will be found at the two ends of the spectrum: unthinking goo, or creatures who have climbed the threshold theorem for life to extend their lifespans to extremely long lifetimes. Which lets us think about what alien intelligent life would look like: it will be pushing the boundaries of using the threshold theorem for life.
Which lets us make predictions about what this advanced alien life would look like. First, and probably most importantly, it would be slow. We know that our own biology operates at an error rate that ends up giving us about 80 years. If we want to extend this further, then taking our guidance from the threshold theorems of computation, we will have to use larger and slower constructions in order to extend this lifetime. And, another important point, we have to do this for each new error model which comes to dominate our death rate. That is, today, cardiac arrest kills the highest percentage of people. This is one error model, so to speak. Once you’ve conquered it, you can go down the line, thinking about error models like earthquakes, falls off cliffs, etc. So, likely, if you want to live a long time, you won’t just be slightly slow compared to our current biological life, but instead extremely slow. And large.
And now you can see my resolution to the Fermi paradox. The Fermi paradox is a fancy way of saying “where are the (intelligent) aliens?” Perhaps we have not found intelligent life because the natural fixed point of intelligent evolution is to produce entities for which our 80 year lifespans are not even a fraction of one of their basic clock cycles. Perhaps we don’t see aliens because, unless you catch life in the short transition from unthinking goo to masters of the universe, the aliens are just operating on too slow a timescale. To discover aliens, we must correlate observations over a long timespan, something we have not yet had the tools and time to do. Even more interestingly, the threshold theorems also have you spread your information out across a large number of individually erring sub-systems. So not only do you have to look at longer time scales, you also need to make correlated observations over larger and larger systems. Individual bits in error correcting codes look as noisy as before; it is only in the aggregate that information is preserved over longer timespans. So not only do we have to look over longer times, we need to do so over larger chunks of space. We don’t see aliens, dear Fermi, because we are young and impatient.
And about those error models. Our medical technology is valiantly tackling a long list of human concerns. But those advanced aliens, what kind of error models are they most concerned with? Might I suggest that among those error models, on the long list of things that might not have been fixed by their current setup, the things that end up limiting their error rate, might we not be on that list? So quick, up the ladder of threshold theorems for life, before we end up an error model in some more advanced creature’s slow intelligent mind!

Why I Left Academia

TLDR: scroll here for the pretty interactive picture.
Over two years ago I abandoned my post at the University of Washington as an assistant research professor studying quantum computing and started a new career as a software developer for Google. Back when I was a denizen of the ivory tower I used to daydream that when I left academia I would write a long “Jerry Maguire”-esque piece about the sordid state of the academic world, of my lot in that world, and how unfair and f**ked up it all is. But maybe with less Tom Cruise. You know the text, the standard rebellious view of all young rebels stuck in the machine (without any mirror.) The song “Mad World” has a lyric that I always thought summed up what I thought it would feel like to leave and write such a blog post: “The dreams in which I’m dying are the best I’ve ever had.”
But I never wrote that post. Partially this was because every time I thought about it, the content of that post seemed so run-of-the-mill boring that I feared my friends who read it would never ever come visit me again after they read it. The story of why I left really is not that exciting. Partially because writing a post about why “you left” is about as “you”-centric as you can get, and yes I realize I have a problem with ego-centric ramblings. Partially because I have been busy learning a new career and writing a lot (omg a lot) of code. Partially also because the notion of “why” is one I—as a card carrying ex-Physicist—cherish and I knew that I could not possibly do justice to giving a decent “why” explanation.
Indeed: what would a “why” explanation for a life decision such as the one I faced look like? For many years when I would think about this I would simply think “well it’s complicated and how can I ever?” There are, of course, the many different components that you think about when considering such decisions. But then what do you do with them? Does it make sense to think about them as probabilities? “I chose to go to Caltech, 50 percent because I liked physics, and 50 percent because it produced a lot of Nobel prize winners.” That does not seem very satisfying.
Maybe the way to do it is to phrase the decisions in terms of probabilities that I was assigning before making the decision. “The probability that I’ll be able to contribute something to physics will be 20 percent if I go to Caltech versus 10 percent if I go to MIT.” But despite what some economists would like to believe there ain’t no way I ever made most decisions via explicit calculation of my subjective odds. Thinking about decisions in terms of what an actor feels each decision would do to increase his/her chances of success feels better than just blindly associating probabilities to components in a decision, but it also seems like a lie, attributing math where something else is at play.
So what would a good description of the model be? After pondering this for a while I realized I was an idiot (for about the eighth time that day. It was a good day.) The best way to describe how my brain was working is, of course, nothing short of my brain itself. So here, for your amusement, is my brain (sorry, only tested using Chrome). Yes, it is interactive.

Cosmology meets Philanthropy — guest post by Jess Riedel

My colleague Jess Riedel recently attended a conference  exploring the connection between these seemingly disparate subjects, which led him to compose the following essay.–CHB

People sometimes ask me how my research will help society.  This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say…tenuously.  And of course, this is a fair question from the layman; tax dollars support most of our work.
I generally take the attitude of former Fermilab director Robert R. Wilson.  During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR.  He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”
Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”.  As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e. the difference between our current low-entropy state and the bleak maximal entropy state that lies in our far future.  And just about everything we might value is ultimately powered by it.  As we speak (or blog), the stupendously vast majority of negentropy usage is directed toward rather uninspiring ends, like illuminating distant planets no one will ever see.
These resources can probably be put to better use.  Bostrom points out that, assuming we don’t destroy ourselves, our descendants likely will one day spread through the universe.  Delaying our colonization of the Virgo Supercluster by one second forgoes about 10^{16} human life-years. Each year, on average, an entire galaxy, with its billions of stars, is slipping outside of our cosmological event horizon, forever separating it from Earth-originating life.  Maybe we should get on with it?
But the careful reader will note that not everyone believes the supply of negentropy is well understood or even necessarily fixed, especially given the open questions in general relativity, cosmology, quantum mechanics, and (recently) black holes.  Changes in our understanding of these and other issues could have deep implications for the future.  And, as we shall see, for what we do tomorrow.
On the other side of the pond, two young investment analysts at Bridgewater Associates got interested in giving some of their new disposable income to charity. Naturally, they wanted to get something for their investment, and so they went looking for information about which charity would get them the most bang for their buck.  But it turned out that not too many people in the philanthropic world seemed to have a good answer.  A casual observer would even be forgiven for thinking that nobody really cared about what was actually getting done with the quarter trillion donated annually to charity.  And this is no small matter; as measured by just about any metric you choose—lives saved, seals unclubbed, children dewormed—charities vary by many orders of magnitude in efficiency.
This prompted them to start GiveWell, now considered by many esteemed commentators to be the premier charity evaluator.  One such commentator is Princeton philosopher Peter Singer, who proposed the famous thought experiment of the drowning child.  Singer is also actively involved with a larger movement that these days goes by the name “Effective Altruism”.  Its founding question: If one wants to accomplish the most good in the world, what, precisely, should one be doing?
You won’t be surprised that there is a fair amount of disagreement on the answer.  But what might surprise you is how disagreement about the fundamental normative questions involved (regardless of the empirical uncertainties) leads to dramatically different recommendations for action.    
A first key topic is animals.  Should our concern about human suffering be traded off against animal suffering? Perhaps weighted by neural mass?  Are we responsible for just the animals we farm, or the untold number suffering in the wild?  Given Nature’s fearsome indifference, is the average animal life even worth living?  Counterintuitive results abound, like the argument that we should eat more meat because animal farming actually displaces much more wild animal suffering than it creates.
Putting animals aside, we will still need to balance “suffering averted”  with “flourishing created”.  How many malaria deaths will we allow to preserve a Rembrandt?  Very, very bad futures controlled by totalitarian regimes are conceivable; should we play it safe and blow up the sun?
But the accounting for future people leads to some of the most arresting ideas.  Should we care about people any less just because they will live in the far future?  If their existence is contingent on our action, is it bad for them to not exist?  Here, we stumble on deep issues in population ethics.  Legendary Oxford philosopher Derek Parfit formulated the argument of the “repugnant conclusion”.  It casts doubt on the idea that a billion wealthy people living sustainably for millennia on Earth would be as ideal as you might initially think.
(Incidentally, the aim of such arguments is not to convince you of some axiomatic position that you find implausible on its face, e.g. “We should maximize the number of people who are born”.  Rather, the idea is to show you that your own already-existing beliefs about the badness of letting people needlessly suffer will probably compel you to act differently, if only you reflect carefully on it.)
The most extreme end of this reasoning brings us back to Bostrom, who points out that we find ourselves at a pivotal time in history. Excepting the last century, humans have existed for a million years without the ability to cause our own extinction.  In probably a few hundred years—or undoubtedly in a few thousand—we will have the ability to create sustainable settlements on other worlds, greatly decreasing the chance that a calamity could wipe us out. In this cosmologically narrow time window we could conceivably extinguish our potentially intergalactic civilization through nuclear holocaust or other new technologies.  Even tiny, well-understood risks like asteroid and comet strikes (probability of extinction event: ~10^{-7} per century) become seriously compelling when the value of the future is brought to bear. Indeed, between 10^{35} and 10^{58} future human lives hang in the balance, so it’s worth thinking hard about.
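The expected-value arithmetic behind that claim is easy to sketch, using only the figures quoted above (a toy calculation, not a policy recommendation):

```python
# Figures quoted in the text: asteroid/comet extinction risk ~1e-7 per
# century, and between 1e35 and 1e58 future human lives at stake.
p_extinction_per_century = 1e-7
future_lives_low, future_lives_high = 1e35, 1e58

# Expected future lives lost per century from this one well-understood risk.
expected_loss_low = p_extinction_per_century * future_lives_low
expected_loss_high = p_extinction_per_century * future_lives_high
print(f"{expected_loss_low:.0e} to {expected_loss_high:.0e} expected lives per century")
```

Even the low end of that range dwarfs the current human population, which is why “tiny” risks become compelling on this accounting.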
So why are you on Facebook when you could be working on Wall Street and donating all your salary to avert disaster? Convincingly dodging this argument is harder than you might guess.  And there are quite a number of smart people who bite the bullet.

4 Pages

Walk up to a physicist at a party (we could add a conditional about the amount of beer consumed by the physicist at this point, but that would be redundant, it is a party after all), and say to him or her “4 pages.”  I’ll bet you that 99 percent of the time the physicist’s immediate response will be the three words “Physical Review Letters.”  PRL, a journal of the American Physical Society, is one of the top journals to publish in as a physicist, signaling to the mating masses whether you are OK and qualified to be hired as faculty at (insert your college name here).  I jest!  (As an aside, am I the only one who reads what APS stands for and wonders why I have to see the doctor to try out for high school tennis?)  In my past life, before I passed away as Pontiff, I was quite proud of the PRLs I’d been lucky enough to have helped with, including one that has some cool integrals, and another that welcomes my niece into the world.
Wait, what?!?  Yes, in “Coherence-Preserving Quantum Bits” the acknowledgements include a reference to my brother’s newborn daughter.  Certainly I know of no other paper where such an acknowledgement to a beloved family member is given.  The other interesting bit about that paper is that we (okay, probably you can mostly blame me) originally entitled it “Supercoherent Quantum Bits.”  PRL, however, has a policy about new words coined by authors, and, while we almost made it to the end without the referee or editor noticing, they made us change the title because “Supercoherent Quantum Bits” would be a new word.  Who would have thought that being a PRL editor meant you had to be a defender of the lexicon?  (Good thing Ben didn’t include qubits in his title.)
Which brings me to the subject of this post.  This is a cool paper.  It shows that a very nice quantum error correcting code due to Bravyi and Haah admits a transversal (all at once now, comrades!) controlled-controlled-phase gate, and that this, combined with another transversal gate (everyone’s fav the Hadamard) and fault-tolerant quantum error correction is universal for quantum computation.  This shows a way to not have to use state distillation for quantum error correction to perform fault-tolerant quantum computing, which is exciting for those of us who hope to push the quantum computing threshold through the roof with resources available to even a third world quantum computing company.
What does this have to do with PRL?  Well this paper has four pages.  I don’t know if it is going to be submitted or has already been accepted at PRL, but it has that marker that sets off my PRL radar, bing bing bing!  And now here is an interesting thing I found in this paper.  The awesome amazing very cool code in this paper  is defined via its stabilizer


This takes up a whopping 4 lines of the article, whereas the disclaimer in the acknowledgements reads:

The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government.

Now I’m not some come-of-age tea party enthusiast who yells at the government like a coyote howls at the moon (I went to Berkeley damnit, as did my parents before me.)  But really, have we come to a point where the god-damn disclaimer on an important paper is longer than the actual definition of the code that makes the paper so amazing?
Before I became a ghost pontiff, I had to raise money from many different three, four, and five letter agencies.  I’ve got nothing but respect for the people who worked the jobs that help supply funding for large research areas like quantum computing.  In fact I personally think we probably need even more people to execute on the civic duty of getting funding to the most interesting and most transformative long and short term research projects. But really?  A disclaimer longer than the code which the paper is about?  Disclaiming what, exactly?  Erghhh.

Apocalypses, Firewalls, and Boltzmann Brains

Last week’s plebeian scare-mongering about the world ending at the wraparound of the Mayan calendar did not distract sophisticated readers of gr-qc and quant-ph from a more arcane problem, the so-called Firewall Question.  This concerns what happens to Alice when she falls through the event horizon of a large, mature black hole.  Until recently it was thought that nothing special would happen to her other than losing her ability to communicate with the outside world, regardless of whether the black hole was old or young, provided it was large enough for space to be nearly flat at the horizon.  But lately Almheiri, Marolf, Polchinski, and Sully argued (see also Preskill’s Quantum Frontiers post and especially the comments on it) that she instead would be vaporized instantly and painlessly as she crossed the horizon.  From Alice’s point of view, hitting the firewall would be like dying in her sleep: her experience would simply end.  Alice’s friends wouldn’t notice the firewall either, since they would either be outside the horizon where they couldn’t see her, or inside and also vaporized. So the firewall question, aside from being central to harmonizing no-cloning with black hole complementarity, has a delicious epistemological ambiguity.
Notwithstanding these conceptual attractions, firewalls are not a pressing practical problem, because the universe is far too young to contain any of the kind of black holes expected to have them (large black holes that have evaporated more than half their total mass).
A more worrisome kind of instant destruction, both practically and theoretically, is the possibility that the observable universe—the portion of the universe accessible to us—may be in a metastable state, and might decay catastrophically to a more stable ground state.    Once nucleated, either spontaneously or through some ill-advised human activity,  such a vacuum phase transition would propagate at the speed of light, annihilating the universe we know before we could realize—our universe would die in its sleep.  Most scientists, even cosmologists, don’t worry much about this either, because our universe has been around so long that spontaneous nucleation appears less of a threat than other more localized disasters, such as a nearby supernova or collision with an asteroid.  When some people, following the precautionary principle,  tried to stop a proposed high-energy physics experiment at Brookhaven Lab because it might nucleate a vacuum phase transition or some other world-destroying disaster, prominent scientists argued that if so, naturally occurring cosmic-ray collisions would already have triggered the disaster long ago.  They prevailed, the experiment was done, and nothing bad happened.
The confidence of most scientists, and laypeople, in the stability of the universe rests on gut-level inductive reasoning: the universe contains ample evidence (fossils, the cosmic microwave background, etc.) of having been around for a long time, and it hasn’t disappeared lately.  Even my four year old granddaughter understands this.  When she heard that some people thought the world would end on Dec 21, 2012, she said, “That’s silly.  The world isn’t going to end.”
The observable universe is full of regularities, both obvious and hidden, that underlie the success of science, the human activity which the New York Times rightly called the best idea of the second millennium.  Several months ago in this blog, in an effort to formalize the kind of organized complexity which science studies, I argued that a structure should be considered complex, or logically deep, to the extent that it contains internal evidence of a complicated causal history, one that would take a long time for a universal computer to simulate starting from an algorithmically random input.
Besides making science possible, the observable universe’s regularities give each of us our notion of “us”, of being one of several billion similar beings, instead of the universe’s sole sentient inhabitant.  An extreme form of that lonely alternative, called a Boltzmann brain, is a hypothetical fluctuation arising within a large universe at thermal equilibrium, or in some other highly chaotic state,  the fluctuation being just large enough to support a single momentarily functioning human brain, with illusory perceptions of an orderly outside world, memories of things that never happened, and expectations of a future that would never happen, because the brain would be quickly destroyed by the onslaught of its hostile actual environment.  Most people don’t believe they are Boltzmann brains because in practice science works.   If a Boltzmann brain observer lived long enough to explore some part of its environment not prerequisite to its own existence, it would find chaos there, not order, and yet we generally find order.
Over the last several decades, while minding their own business and applying the scientific method in a routine way, cosmologists stumbled into an uncomfortable situation: the otherwise successful theory of eternal inflation seemed to imply that tiny Boltzmann brain universes were more probable than big, real universes containing galaxies, stars, and people.  More precisely, in these models, the observable universe is part of an infinite seething multiverse, within which real and fake universes each appear infinitely often, with no evident way of defining their relative probabilities—the so-called “measure problem”.
Cosmologists Raphael Bousso and Leonard Susskind and Yasunori Nomura (cf. also a later paper) recently proposed a quantum solution to the measure problem, treating the inflationary multiverse as a superposition of terms, one for each universe, including all the real and fake universes that look more or less like ours, and many others whose physics is so different that nothing of interest happens there. Sean Carroll comments accessibly and with cautious approval on these and related attempts to identify the multiverse of inflation with that of many-worlds quantum mechanics.
Aside from the measure problem and the nature of the multiverse, it seems to me that in order to understand why the observed universe is complicated and orderly, we need to better characterize what a sentient observer is.  For example, can there be a sentient observer who/which is not complex in the sense of logical depth?  A Boltzmann brain would at first appear to be an example of this, because though (briefly) sentient it has by definition not had a long causal history.  It is nevertheless logically deep, because despite its  short actual history it has the same microanatomy as a real brain, which (most plausibly) has had a long causal history.   The Boltzmann brain’s  evidence of having had long history is thus deceptive, like the spurious evidence of meaning the protagonists in Borges’ Library of Babel find by sifting through mountains of chaotic books, until they find one with a few meaningful lines.
I am grateful to John Preskill and especially Alejandro Jenkins for helping me correct and improve early versions of this post, but of course take full responsibility for the errors and misconceptions it may yet contain.

The Nine Circles of LaTeX Hell

This guy had an overfull hbox
Poorly written LaTeX is like a rash. No, you won’t die from it, but it needlessly complicates your life and makes it difficult to focus on pertinent matters. The victims of this unfortunate blight can be both the readers, as in the case of bad typography, or yourself and your coauthors, as in the case of bad coding practice.
Today, in an effort to combat this particular scourge (and in keeping with the theme of this blog’s title), I will be your Virgil on a tour through the nine circles of LaTeX hell. My intention is not to shock or frighten you, dear Reader. I hope, like Dante before me, to promote a more virtuous lifestyle by displaying the wages of sin. However, unlike Dante I will refrain from pointing out all the famous people that reside in these various levels of hell. (I’m guessing Dante already had tenure when he wrote The Inferno.)
The infractions will be minor at first, becoming progressively more heinous, until we reach utter depravity at the ninth level. Let us now descend the steep and savage path.

1) Using {\it ...} and {\bf ...}, etc.

Admittedly, this point is a quibble at best, but let me tell you why to use \textit{...} and \textbf{...} instead. First, \it and \bf don’t compose correctly (in fact, they reset all font attributes), so {\it {\bf ...}} does not produce bold italics as you might expect. Second, \it fails to insert the italic correction, the extra bit of space needed after a slanted letter. Compare {\it test}text to \textit{test}text and notice how crowded the former is.
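A minimal comparison you can compile yourself (the filler words are mine):

```latex
% Old-style declarations reset all font attributes, so they don't nest:
{\it {\bf not bold italic}}    % comes out bold upright, not bold italic

% The LaTeX font commands compose and insert the italic correction:
\textit{\textbf{bold italic}}  % works as expected

% Italic correction: note the spacing before ``text'' in each case
{\it test}text \quad \textit{test}text
```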

2) Using \def

\def is a plain TeX command that defines macros without first checking whether the name is already taken, so it will silently overwrite an existing macro without producing an error message. This is dangerous when you have coauthors: maybe you use \E to mean \mathbb{E}, while your coauthor uses it to mean \mathcal{E}. If you are writing different sections of the paper, you might introduce some very confusing typos. Use \newcommand or \renewcommand instead.
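To see the failure mode concretely (\E here is the macro from the example above):

```latex
\def\E{\mathbb{E}}      % silently defines \E
\def\E{\mathcal{E}}     % silently *re*defines \E: no warning, no error

\newcommand{\E}{\mathbb{E}}    % errors if \E already exists, so you find out
\renewcommand{\E}{\mathcal{E}} % redefinition must be explicit, so intent is clear
```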

3) Using $$...$$

This one is another plain TeX construct. It messes up vertical spacing within formulas, making them inconsistent, and it causes the fleqn option to stop working. Moreover, it is syntactically harder to parse, since you can’t detect an unmatched pair as easily. Using \[...\] avoids all of these issues.
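Side by side (the formula is just filler):

```latex
$$ e^{i\pi} + 1 = 0 $$   % plain TeX display math: avoid

\[ e^{i\pi} + 1 = 0 \]   % LaTeX display math: use this instead
```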

4) Using the eqnarray environment

If you don’t believe me that eqnarray is bad news, then ask Lars Madsen, the author of “Avoid eqnarray!”, a 10-page treatise on the subject. eqnarray handles spacing in an inconsistent manner and will overprint equation numbers on long equations. You should use the amsmath package with the align environment instead.
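The replacement is nearly a drop-in change (this fragment assumes \usepackage{amsmath} in the preamble):

```latex
% Avoid: eqnarray pads the relation column with \arraycolsep
\begin{eqnarray}
f(x) &=& (x+1)^2 \\
     &=& x^2 + 2x + 1
\end{eqnarray}

% Use: align spaces the = correctly and plays well with the rest of amsmath
\begin{align}
f(x) &= (x+1)^2 \\
     &= x^2 + 2x + 1
\end{align}
```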
Now we begin reaching the inner circles of LaTeX hell, where the crimes become noticeably more sinister.

5) Using standard size parentheses around display size fractions

Consider the following abomination: (\displaystyle \frac{1+x}{2})^n (\frac{x^k}{1+x^2})^m = (\int_{-\infty}^{\infty} \mathrm{e}^{-u^2}\mathrm{d}u )^2.
Go on, stare at this for one minute and see if you don’t want to tear your eyes out. Now you know how your reader feels when you are too lazy to use \left and \right.
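The cure costs a few extra characters per delimiter. Here is the first factor of the abomination, before and after:

```latex
( \frac{1+x}{2} )^n              % standard size parentheses: avoid

\left( \frac{1+x}{2} \right)^n   % delimiters grow to fit the fraction

\biggl( \frac{1+x}{2} \biggr)^n  % or pick a fixed size by hand
```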

6) Not using bibtex

Manually writing your bibliography makes mistakes more likely and adds a huge, unnecessary workload for you and your coauthors. If you don’t already use BibTeX, make the switch today.
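The workflow, sketched (the entry and keys below are illustrative, not a real citation):

```latex
% refs.bib
@article{doe2012,
  author  = {Doe, Jane},
  title   = {On the Typesetting of Papers},
  journal = {Journal of Good Examples},
  year    = {2012},
}

% in the .tex file
As shown in \cite{doe2012}, ...
\bibliographystyle{alpha}
\bibliography{refs}
```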

7) Using text in math mode

Writing H_{effective} is horrendous, but even H_{eff} is an affront. The broken ligature makes these examples particularly bad. There are lots of ways to avoid this, like using \text or \mathrm, which lead to the much more elegant and legible H_{\text{eff}}. Don’t use \mbox, though, because it doesn’t get the font size right: H_{\mbox{eff}}.
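In source form, the options look like this (\text requires amsmath in the preamble):

```latex
$H_{effective}$   % typeset as a product of italic variables: avoid
$H_{\text{eff}}$  % upright text, correctly shrunk in the subscript
$H_{\mathrm{eff}}$% also fine for short roman labels
$H_{\mbox{eff}}$  % avoid: \mbox keeps full text size inside the subscript
```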

8) Using a greater-than sign for a ket

At this level of hell in Dante’s Inferno, some of the accursed are being whipped by demons for all eternity. This seems to be about the right level of punishment for people who use the obscenity |\psi>.

9) Not using TeX or LaTeX

This one is so bad, it tops Scott’s list of signs that a claimed mathematical breakthrough is wrong. If you are typing up your results in Microsoft Word using Comic Sans font, then perhaps you should be filling out TPS reports instead of writing scientific papers.

More cracks in the theory of relativity?

When the OPERA collaboration announced their result that they had observed neutrinos traveling faster than the speed of light, it rocked the entire physics community. However, despite the high statistical certainty of the claim, any sober physicist knew that the possibility of systematic errors means that we must patiently wait for additional independent experiments. Einstein’s theory hasn’t been overthrown yet!

Or has it?

Enter the good folks at Conservapedia, a “conservative, family-friendly Wiki encyclopedia.” They have helpfully compiled a list of 39 counterexamples to relativity, and noted that “any one of them shows that the theory of relativity is incorrect.” In fact, relativity “is heavily promoted by liberals who like its encouragement of relativism and its tendency to mislead people in how they view the world.” That is already damning evidence, but you really must look at the list.

A few of them actually have some partial grounding in reality. For example,

6. Spiral galaxies confound relativity, and unseen “dark matter” has been invented to try to retrofit observations to the theory.

Most of them, however, are either factually challenged or irrelevant:

14. The action-at-a-distance by Jesus, described in John 4:46-54, Matthew 15:28, and Matthew 27:51.

18. The inability of the theory of relativity to lead to other insights, contrary to every extant verified theory of physics.

Why are these scientists at OPERA wasting taxpayers’ money on their silly experiments when they could just check this list? And to Bill O’Reilly and Rush Limbaugh: please post your predictions for the LHC to the arXiv soon, before all the data gets analyzed.

Update from Aram: Ironically, Conservapedians don’t like Einstein’s relativity because of its occasional use as a rhetorical flourish in support of cultural relativism. (I agree that using it in this manner constitutes bad writing, and a terribly mixed metaphor.) But by denouncing relativity as a liberal conspiracy along with evolution and global warming, they’ve demonstrated their own form of intellectual relativism: the idea that there is no objective truth, but that we are all entitled to believe whatever facts about the world we prefer. At the risk of improving the credibility of Conservapedia, I made this point on their talk page. Let’s see how long it lasts.

What are the odds?

Let’s multiply together a bunch of numbers which are less than one and see how small they get!
If that sounds like fun, then you’ll love this sleek new infographic (of which the above is just the teaser) posted this morning at BoingBoing. The graphic is based on this blog post by Dr. Ali Binazir, who apparently has an AB (same as a BA) from Harvard, an MD from the UC San Diego School of Medicine, and an M.Phil. from Cambridge.
I’ll save you the effort of clicking through: the good doctor estimates the probability of “your existing as you, today”. His estimate consists of (what else?) multiplying a bunch of raw probability estimates together without conditioning! And I’ll give you a hint as to the conclusion: the odds that you exist are basically zero! Astounding.
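The error is easy to demonstrate. Here is a minimal sketch (the deck-of-cards numbers are mine, not the doctor’s) of how multiplying unconditioned probabilities goes wrong whenever the events are dependent:

```python
from fractions import Fraction

# Probability of drawing two aces from a 52-card deck, without replacement.
p_first_ace = Fraction(4, 52)

# Correct: condition the second draw on the first (chain rule),
# P(A1 and A2) = P(A1) * P(A2 | A1).
p_second_given_first = Fraction(3, 51)
p_correct = p_first_ace * p_second_given_first   # 1/221

# Bogus: multiply the two marginal probabilities as if the draws were
# independent -- the same move as multiplying the odds of each step of
# your existence without conditioning on the steps before it.
p_bogus = p_first_ace * Fraction(4, 52)          # 1/169

print(p_correct, p_bogus)
```

Two draws from a deck are about as mild a dependence as you can get, and even here the naive product is off by a sizeable factor; chain enough of these together and the compounding error can make almost anything look like a miracle.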
I should add that it seems he was forced to append a disclaimer, “It’s all an exercise to get you thinking…”, and to (obliquely) admit at the end of the post that the calculation is bogus.
Is there any branch of mathematics which is abused so extravagantly as probability? I think these sorts of abuses are beyond even the most egregious statistical claims, no?