Happy New Year! To celebrate let’s talk about error correcting codes and… aliens.
The universe, as many have noted, is kind of like a computer. Or at least our best description of the universe is given by equations that prescribe how various quantities change in time, a lot like a computer program describes how data in a computer changes in time. Of course, this ignores one of the fundamental differences between our universe and our computers: our computers tend to be able to persist information over long periods of time. In contrast, the quantities describing our universe tend to have a hard time being recoverable after even a short amount of time: the location (wave function) of an atom, unless carefully controlled, is impacted by an environment that quickly makes it impossible to recover the initial bits (qubits) of the location of the atom.
Computers, then, are special objects in our universe, ones that persist and allow manipulation of long lived bits of information. A lot like life. The bits describing me, the structural stuff of my bones, skin, and muscles, the more concretely information theoretic bits of my grumbly personality and memories, the DNA describing how to build a new version of me, are all pieces of information that persist over what we call a lifetime, despite the constant gnaw of second law armed foes that would transform me into unliving goo. Maintaining my bits in the face of phase transitions, viruses, bowel obstructions, cardiac arrest, car accidents, and bears is what human life is all about, and we increasingly do it well, with life expectancy now approaching 80 years in many parts of the world.
But 80 years is not that long. Our universe is 13.8ish billion years old, or about 170ish million of today’s lucky human life expectancies. Most of us would, all things equal, like to live even longer. We’re not big fans of death. So what obstacles are there to life extension? Yadda yadda biology squishy stuff, yes. Not qualified to go there so I won’t. But since life is like a computer in regard to maintaining information, we can look toward our understanding of what allows computers to preserve information…and extrapolate!
Enter error correction. If bits are subject to processes that flip their values, then you’ll lose information. If, however, we give up storing information in each individual bit and instead store a single bit across multiple individual noisy bits, we can make this spread-out bit live much longer. Instead of saying 0, and watching it decay to an unknown value, say 000…00, 0 many times, and ask whether the majority of these values remain 0. Voilà, you’ve got an error correcting code. Your smeared-out information will be preserved longer, but, and here is the important point, at the cost of using more bits.
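The majority-vote trick above can be sketched in a few lines of Python. This is a toy model, assuming a simple symmetric bit-flip channel; the 10% error rate and the number of copies are illustrative choices, nothing more:

```python
import random

random.seed(1234)  # make the simulation reproducible

def noisy_copy(bit, p):
    """Return the bit, flipped with probability p (a symmetric bit-flip channel)."""
    return bit ^ (random.random() < p)

def encode(bit, n):
    """Repetition code: store one logical bit as n physical copies."""
    return [bit] * n

def decode(codeword):
    """Majority vote: recovers the logical bit if fewer than half the copies flipped."""
    return int(sum(codeword) > len(codeword) / 2)

def logical_error_rate(n, p, trials=100_000):
    """Estimate how often the decoded logical bit disagrees with what was stored."""
    errors = 0
    for _ in range(trials):
        received = [noisy_copy(b, p) for b in encode(0, n)]
        errors += decode(received) != 0
    return errors / trials

# With a 10% physical error rate, a single bit fails about 10% of the time,
# while 9 copies with majority vote fail closer to 0.1% of the time.
print(logical_error_rate(1, 0.1))
print(logical_error_rate(9, 0.1))
```

The price is visible right in the code: nine physical bits, and a decoding step, to keep one logical bit alive longer.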
Formalizing this a bit, there is a class of beautiful theorems, due originally to von Neumann, classically, and a host of others, quantumly, called the threshold theorems for fault-tolerant computing. These tell you, given a model for how errors occur, how fast they occur, and how fast you can compute, whether you can reliably compute. Roughly, these theorems all say something like: if your error rate is below some threshold, then you can compute with whatever smaller failure rate you wish, using a complicated larger construction whose size grows only polynomially in the logarithm of the inverse of the failure rate you wish to achieve. What I’d like to pull out of these theorems for talking about life are two things: 1) there is an overhead to reliable computation, which is both larger, in size, and slower, in time, and 2) the scaling of this overhead depends crucially on the error model assumed.
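To see that logarithmic overhead concretely, here is a hedged sketch using the repetition code from above as a stand-in for the far more elaborate constructions the actual theorems use. The exact logical error rate of an n-copy majority vote is just a binomial tail, and below threshold the number of copies needed grows only slowly as you demand better and better reliability:

```python
from math import comb

def repetition_logical_error(n, p):
    """Exact probability that a majority of n independent copies flip at rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def copies_needed(p, target):
    """Smallest odd n whose logical error rate is below `target`.

    Below the repetition code's threshold (p < 1/2) this grows roughly
    linearly in log(1/target); above it, repetition cannot help at all.
    """
    if p >= 0.5:
        raise ValueError("above threshold: no amount of repetition helps")
    n = 1
    while repetition_logical_error(n, p) > target:
        n += 2
    return n

# Each factor of 100 in demanded reliability costs only a handful more copies:
for target in (1e-2, 1e-4, 1e-6):
    print(target, copies_needed(0.1, target))
```

Note the two features the post leans on: the overhead exists (more copies, and a slower decode), and the whole game collapses if the error rate sits above the threshold.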
Which leads back to life. If it is a crucial part of life to preserve information, to keep your bits moving down the timeline, then it seems that the threshold theorems will have something, ultimately, to say about extending your lifespan. What are the error models and what are the fundamental error rates of current human life? Is there a threshold theorem for life? I’m not sure we understand biology well enough to pin this down yet, but I do believe we can use the above discussion to extrapolate about our future evolution.
Or, because witnessing the evolution of humans out of their present state seemingly requires waiting a really long time, or technology we currently don’t have, let’s apply this to…aliens. 13.8 billion years is a long time. It now looks like there are lots of planets. If intelligent life arose on those planets billions of years ago, then it is likely that it has also had billions of years to evolve past the level of intelligence that infects our current human era. Which is to say, it seems like any such hypothetical aliens would have had time to push the boundaries of the threshold theorem for life. They would have manipulated and technologically engineered themselves into beings that live for very long periods of time. They would have applied the constructions of the threshold theorem for life to themselves, lengthening their lives by applying the principles of fault-tolerant computing.
As we’ve witnessed over the last century, intelligent life seems to hit a point where rapid technological change occurs. Supposing that the time life spends going from reproducing, unthinking stuff to megalords of technological magic, able to modify themselves and apply the constructions of the threshold theorem for life, is short, then it seems that most life will be found at the two ends of the spectrum: unthinking goo, or creatures who have climbed the threshold theorem for life to extend their lifespans to extremely long lifetimes. Which lets us think about what intelligent alien life would look like: it will be pushing the boundaries of the threshold theorem for life.
Which lets us make predictions about what this advanced alien life would look like. First, and probably most importantly, it would be slow. We know that our own biology operates at an error rate that yields a lifespan of about 80 years. If we want to extend this further, then taking our guidance from the threshold theorems of computation, we will have to use larger and slower constructions in order to extend this lifetime. And, another important point, we have to do this for each new error model that comes to dominate our death rate. That is, today, heart disease kills the highest percentage of people. This is one error model, so to speak. Once you’ve conquered it, you can go down the line, thinking about error models like earthquakes, falls off cliffs, etc. So, likely, if you want to live a long time, you won’t just be slightly slow compared to our current biological life, but extremely slow. And large.
And now you can see my resolution to the Fermi paradox. The Fermi paradox is a fancy way of saying “where are the (intelligent) aliens?” Perhaps we have not found intelligent life because the natural fixed point of intelligent evolution is to produce entities for which our 80-year lifespan is a tiny fraction of one of their basic clock cycles. Perhaps we don’t see aliens because, unless you catch life in the short transition from unthinking goo to masters of the universe, the aliens are operating on too slow a timescale. To discover aliens, we must correlate observations over a long timespan, something we have not yet had the tools or time to do. Even more interesting, the threshold theorems also have you spread your information out across a large number of individually erring subsystems. So you don’t just have to look at longer timescales; you also need to make correlated observations over larger and larger systems. Individual bits in error correcting codes look as noisy as before; it is only in the aggregate that information is preserved over longer timespans. Not only do we have to look slower, we need to do so over larger chunks of space. We don’t see aliens, dear Fermi, because we are young and impatient.
And about those error models. Our medical technology is valiantly tackling a long list of human concerns. But those advanced aliens, what kind of error models are they most concerned with? Might I suggest that among those error models, on the long list of things that might not have been fixed by their current setup, the things that end up limiting their error rate, we ourselves might appear? So quick, up the ladder of the threshold theorems for life, before we end up an error model in some more advanced creature’s slow intelligent mind!
Cosmology meets Philanthropy — guest post by Jess Riedel
People sometimes ask me how my research will help society. This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say… tenuously. And of course, this is a fair question from the layman; tax dollars support most of our work.
I generally take the attitude of former Fermilab director Robert R. Wilson. During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR. He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”
Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”. As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e. the difference between our current low-entropy state and the bleak maximal entropy state that lies in our far future. And just about everything we might value is ultimately powered by it. As we speak (or blog), the stupendously vast majority of negentropy usage is directed toward rather uninspiring ends, like illuminating distant planets no one will ever see.
These resources can probably be put to better use. Bostrom points out that, assuming we don’t destroy ourselves, our descendants likely will one day spread through the universe. Delaying our colonization of the Virgo Supercluster by one second forgoes about $latex 10^{16}$ human life-years. Each year, on average, an entire galaxy—with its billions of stars—is slipping outside of our cosmological event horizon, forever separating it from Earth-originating life. Maybe we should get on with it?
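For the curious, the $latex 10^{16}$ figure can be reproduced from Bostrom’s rough inputs. The star count and lives-per-star below are loudly assumed round numbers in the spirit of his estimate, not measurements:

```python
# Back-of-the-envelope version of Bostrom's "astronomical waste" estimate.
# Assumed round numbers: ~10^13 stars in the Virgo Supercluster, each able
# to support ~10^10 human lives at a time.
stars = 1e13
lives_per_star = 1e10
seconds_per_year = 3.15e7  # ~seconds in a year

# Potential human life-years forgone per second of delayed colonization:
life_years_per_second = stars * lives_per_star / seconds_per_year
print(f"{life_years_per_second:.1e}")  # on the order of 10^16
```

Quibble with any input by an order of magnitude and the conclusion barely moves; that robustness is the point of the argument.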
But the careful reader will note that not everyone believes the supply of negentropy is well understood or even necessarily fixed, especially given the open questions in general relativity, cosmology, quantum mechanics, and (recently) black holes. Changes in our understanding of these and other issues could have deep implications for the future. And, as we shall see, for what we do tomorrow.
On the other side of the pond, two young investment analysts at Bridgewater Associates got interested in giving some of their new disposable income to charity. Naturally, they wanted to get something for their investment, and so they went looking for information about which charity would get them the most bang for their buck. But it turned out that not too many people in the philanthropic world seemed to have a good answer. A casual observer would even be forgiven for thinking that nobody really cared about what was actually getting done with the quarter trillion dollars donated annually to charity. And this is no small matter; as measured by just about any metric you choose—lives saved, seals unclubbed, children dewormed—charities vary by many orders of magnitude in efficiency.
This prompted them to start GiveWell, now considered by many esteemed commentators to be the premier charity evaluator. One such commentator is Princeton philosopher Peter Singer, who proposed the famous thought experiment of the drowning child. Singer is also actively involved with a larger movement that these days goes by the name “Effective Altruism”. Its founding question: If one wants to accomplish the most good in the world, what, precisely, should one be doing?
You won’t be surprised that there is a fair amount of disagreement on the answer. But what might surprise you is how disagreement about the fundamental normative questions involved (regardless of the empirical uncertainties) leads to dramatically different recommendations for action.
A first key topic is animals. Should our concern about human suffering be traded off against animal suffering? Perhaps weighted by neural mass? Are we responsible for just the animals we farm, or the untold number suffering in the wild? Given Nature’s fearsome indifference, is the average animal life even worth living? Counterintuitive results abound, like the argument that we should eat more meat because animal farming actually displaces much more wild animal suffering than it creates.
Putting animals aside, we will still need to balance “suffering averted” with “flourishing created”. How many malaria deaths will we allow to preserve a Rembrandt? Very, very bad futures controlled by totalitarian regimes are conceivable; should we play it safe and blow up the sun?
But the accounting for future people leads to some of the most arresting ideas. Should we care about people any less just because they will live in the far future? If their existence is contingent on our action, is it bad for them to not exist? Here, we stumble on deep issues in population ethics. Legendary Oxford philosopher Derek Parfit formulated the argument of the “repugnant conclusion”. It casts doubt on the idea that a billion wealthy people living sustainably for millennia on Earth would be as ideal as you might initially think.
(Incidentally, the aim of such arguments is not to convince you of some axiomatic position that you find implausible on its face, e.g. “We should maximize the number of people who are born”. Rather, the idea is to show you that your own already-existing beliefs about the badness of letting people needlessly suffer will probably compel you to act differently, if only you reflect carefully on it.)
The most extreme end of this reasoning brings us back to Bostrom, who points out that we find ourselves at a pivotal time in history. Excepting the last century, humans have existed for a million years without the ability to cause our own extinction. In probably a few hundred years—or undoubtedly in a few thousand—we will have the ability to create sustainable settlements on other worlds, greatly decreasing the chance that a calamity could wipe us out. In this cosmologically narrow time window we could conceivably extinguish our potentially intergalactic civilization through nuclear holocaust or other new technologies. Even tiny, well-understood risks like asteroid and comet strikes (probability of extinction event: ~$latex 10^{-7}$ per century) become seriously compelling when the value of the future is brought to bear. Indeed, between $latex 10^{35}$ and $latex 10^{58}$ future human lives hang in the balance, so it’s worth thinking hard about.
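Plugging the post’s own numbers together makes the expected-value logic explicit. This is a trivial sketch; the inputs are just the rough figures quoted above:

```python
# Expected lives at stake from one well-understood risk, using the post's
# quoted figures (not precise estimates): asteroid/comet extinction risk of
# ~10^-7 per century, and 10^35 to 10^58 potential future human lives.
p_extinction_per_century = 1e-7
future_lives_low, future_lives_high = 1e35, 1e58

expected_loss_low = p_extinction_per_century * future_lives_low
expected_loss_high = p_extinction_per_century * future_lives_high
print(f"{expected_loss_low:.0e} to {expected_loss_high:.0e} "
      "expected lives lost per century")
```

Even the low end dwarfs every human who has ever lived, which is why a “tiny” risk multiplied by an astronomical stake can dominate the moral ledger.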
So why are you on Facebook when you could be working on Wall Street and donating all your salary to avert disaster? Convincingly dodging this argument is harder than you might guess. And there are quite a number of smart people who bite the bullet.