Error correcting aliens

Happy New Year!  To celebrate let’s talk about error correcting codes and… aliens.

The universe, as many have noted, is kind of like a computer.  Or at least our best description of the universe is given by equations that prescribe how various quantities change in time, a lot like a computer program describes how data in a computer changes in time.  Of course, this ignores one of the fundamental differences between our universe and our computers: our computers can persist information over long periods of time.  In contrast, the quantities describing our universe tend to become unrecoverable after even a short time: the location (wave function) of an atom, unless carefully controlled, is disturbed by its environment so quickly that the initial bits (qubits) describing the atom’s location become impossible to recover.

Computers, then, are special objects in our universe, ones that persist and allow manipulation of long lived bits of information.  A lot like life.  The bits describing me, the structural stuff of my bones, skin, and muscles, the more concretely information-theoretic bits of my grumbly personality and memories, the DNA describing how to build a new version of me, are all pieces of information that persist over what we call a lifetime, despite the constant gnaw of second-law-armed foes that would transform me into unliving goo.  Maintaining my bits in the face of phase transitions, viruses, bowel obstructions, cardiac arrest, car accidents, and bears is what human life is all about, and we increasingly do it well, with life expectancy now approaching 80 years in many parts of the world.

But 80 years is not that long.  Our universe is 13.8ish billion years old, or about 170ish million lucky current humans’ life expectancies.  Most of us would, all things being equal, like to live even longer.  We’re not big fans of death.  So what obstacles are there to life extension?  Yadda yadda biology squishy stuff, yes.  Not qualified to go there so I won’t.  But since life is like a computer in regard to maintaining information, we can look toward our understanding of what allows computers to preserve information… and extrapolate!

Enter error correction.  If bits are subject to processes that flip their values, then you’ll lose information.  If, however, we give up storing information in each individual bit and instead store single logical bits across multiple individual noisy bits, we can make this spread-out bit live much longer.  Instead of saying 0, and watching it decay to an unknown value, say 000…00, 0 many times, and ask whether the majority of these values remain 0.  Voilà, you’ve got an error correcting code.  Your smeared-out information will be preserved longer, but, and here is the important point, at the cost of using more bits.
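A minimal sketch of that majority-vote repetition code, in Python (the function names and parameters are my own illustration, not any particular library):

```python
import random

def encode(bit, n):
    """Repetition code: smear one logical bit across n physical bits."""
    return [bit] * n

def apply_noise(bits, p, rng):
    """Independently flip each physical bit with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

def decode(bits):
    """Majority vote: the logical bit survives if fewer than half flipped."""
    return int(sum(bits) > len(bits) / 2)

def logical_error_rate(n, p, trials=100_000, seed=0):
    """Estimate how often an encoded 0 decays into a 1."""
    rng = random.Random(seed)
    failures = sum(decode(apply_noise(encode(0, n), p, rng))
                   for _ in range(trials))
    return failures / trials
```

With a 10% flip rate per physical bit, a 5-bit code fails under 1% of the time: the smeared-out bit lives longer, at five times the storage cost.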

Formalizing this a bit: there is a class of beautiful theorems, due originally to von Neumann classically, and to a host of others quantumly, called the threshold theorems for fault-tolerant computing, which tell you, given a model for how errors occur, how fast they occur, and how fast you can compute, whether you can reliably compute. Roughly, these theorems all say something like: if your error rate is below some threshold, then you can compute at any lower target error rate, using a complicated larger construction whose size grows polynomially in the logarithm of the inverse of the error rate you wish to achieve. What I’d like to pull out of these theorems for talking about life are two things: 1) there is an overhead to reliably computing, and this overhead is both larger in size and slower in time, and 2) the scaling of this overhead depends crucially on the error model assumed.
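The repetition code makes a toy version of that overhead scaling easy to see (my own sketch, not one of the actual fault-tolerance constructions from the theorems): compute the exact majority-vote failure probability, then ask how many physical bits are needed to hit a target logical error rate.

```python
from math import comb

def repetition_failure(n, p):
    """Exact probability that majority vote over n bits fails,
    i.e. that more than half of them flip (flip probability p each)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def bits_needed(p, target):
    """Smallest odd repetition length whose failure rate is below target.
    Only sensible below threshold (p < 1/2); above it, more bits hurt."""
    n = 1
    while repetition_failure(n, p) > target:
        n += 2
    return n
```

At a 10% per-bit error rate, tightening the target from 10^-3 to 10^-6 only grows the code from 9 to 23 bits, the kind of polylogarithmic overhead the threshold theorems promise.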

Which leads back to life. If it is a crucial part of life to preserve information, to keep your bits moving down the timeline, then it seems that the threshold theorems will have something, ultimately, to say about extending your lifespan. What are the error models and what are the fundamental error rates of current human life? Is there a threshold theorem for life? I’m not sure we understand biology well enough to pin this down yet, but I do believe we can use the above discussion to extrapolate about our future evolution.

Or, because witnessing evolution of humans out of their present state seemingly requires waiting a really long time, or technology we currently don’t have, let’s apply this to… aliens. 13.8 billion years is a long time. It now looks like there are lots of planets. If intelligent life arose on those planets billions of years ago, then it is likely that it has also had billions of years to evolve past the level of intelligence that infects our current human era. Which is to say, it seems like any such hypothetical aliens would have had time to push the boundaries of the threshold theorem for life. They would have manipulated and technologically engineered themselves into beings that live for a long period of time. They would have applied the constructions of the threshold theorem for life to themselves, lengthening their lives by applying the principles of fault-tolerant computing.

As we’ve witnessed over the last century, intelligent life seems to hit a point where rapid technological change occurs. Suppose the period of time life spends going from reproducing, unthinking stuff to megalords of technological magic, able to modify themselves and apply the constructions of the threshold theorem for life, is short. Then most life will be found at the two ends of the spectrum: unthinking goo, or creatures who have climbed the threshold theorem for life to extend their lifespans to extremely long lifetimes. Which lets us think about what alien intelligent life would look like: it will be pushing the boundaries of the threshold theorem for life.

Which lets us make predictions about what this advanced alien life would look like. First, and probably most importantly, it would be slow. We know that our own biology operates at an error rate that ends up giving us about 80 years. If we want to extend this further, then, taking our guidance from the threshold theorems of computation, we will have to use larger and slower constructions in order to extend this lifetime. And, another important point, we have to do this for each new error model which comes to dominate our death rate. That is, today, cardiac arrest kills the highest percentage of people. This is one error model, so to speak. Once you’ve conquered it, you can go down the line, thinking about error models like earthquakes, falls off cliffs, etc. So, likely, if you want to live a long time, you won’t just be slightly slow compared to our current biological life, but extremely slow. And large.

And now you can see my resolution to the Fermi paradox. The Fermi paradox is a fancy way of saying “where are the (intelligent) aliens?” Perhaps we have not found intelligent life because the natural fixed point of intelligent evolution is to produce entities for which our 80 year lifespans are not even a fraction of one of their basic clock cycles. Perhaps we don’t see aliens because, unless you catch life in the short transition from unthinking goo to masters of the universe, the aliens are just operating on too slow a timescale. To discover aliens, we must correlate observations over a long timespan, something we have not yet had the tools and time to do. Even more interestingly, the threshold theorems also have you spread your information out across a large number of individually erring sub-systems. So not only do you have to look at longer time scales, you also need to make correlated observations over larger and larger systems. Individual bits in error correcting codes look as noisy as before; it is only in the aggregate that information is preserved over longer timespans. So not only do we have to look at slower timescales, we need to do so over larger chunks of space. We don’t see aliens, dear Fermi, because we are young and impatient.

And about those error models. Our medical technology is valiantly tackling a long list of human concerns. But those advanced aliens, what kind of error models are they most concerned with? Might I suggest that among those error models, on the long list of things that might not have been fixed by their current setup, the things that end up limiting their error rate, we ourselves might appear? So quick, up the ladder of threshold theorems for life, before we end up an error model in some more advanced creature’s slow, intelligent mind!

This entry was posted in Astrobiology, Computer Science, Extralusionary Intelligence, Off The Deep End.

9 Responses to Error correcting aliens

  1. Carl Lumma says:

    Interesting idea, but the 80 years is not the result of error accumulation. It’s programmed death regulated by kin selection. (Related is the new recommendation against antioxidant supplements.)

    Nor does it seem DNA errors are entirely bad. They appear to be a valuable source of mutation.

    So there’s no reason to think we’re on an efficient frontier given by the threshold theorems, at least not from an individual’s point of view.

    A problem with going slow is that it creates incentives for organisms to go faster and displace you. Faster generally wins. And if it does, relativity limits your effective size.


  2. David Meyer says:

    Great piece, Dave! It suggests that biologists should be able to identify error-correction/fault tolerant mechanisms, at a range of scales, in non-alien species. At the level of DNA, of course, there is the weak error correction of the several-to-one codon to amino acid mapping. But there should be more, at higher levels, and I expect some are known to people expert in “Yadda yadda biology squishy stuff”!


  3. wolfgang says:

    Very interesting!

    You suggest that alien intelligence would be large (compared to humans) and operate on long time scales.

    So perhaps Earth as a whole is such an alien …
    Or perhaps the aliens have evolved to a point where they would set up Earth as a computer to calculate something really important to them (like the question about
    life, the universe and all)!


  4. AlanAlda says:

    I like the article, but I just don’t see why slowness and largeness follow logically from any of the previous points. I think a much more likely possibility in the case of the mentioned earthquakes and falls will be that we modify our environment rather than ourselves, i.e. technology comes to the rescue yet again (as arguably it already does in the case of earthquakes in many modern cities).


  5. Abdullah Khalid says:

    I am not quite sure I agree completely.

    Our brain is what we want to primarily preserve for longer times, our bodies are relatively less important. And brains are (in your computer analogy) just a hard disk and a processor. But the processor is pretty similar for all humans. Ultimately we really care more about our hard disk/memories than our brain processor – that might serve to make us better physicists/sportsmen/etc than others. And memories are easy to protect against errors. Simply copy everything in the current hard disk to a newly built hard disk. Errors even with current technology are extremely small in this process.


  6. Dave Bacon says:

    Sorry for the slow reply to comments. Remember this is the internet, so everything I say should be followed by a great big depth of “this is fun, not serious!”

    Carl makes some interesting points:
    “It’s programmed death regulated by kin selection.” Not sure why that’s not just another error process?

    That DNA mutations are sometimes good (note that I wasn’t arguing that DNA mutations were the limiter here, though they might be one error process) is a lot like, on a much longer time scale, the fact that error correction requires access to a fast entropy dump in order to work: to fix errors you need to diagnose them and then forget them.

    If faster organisms always won, we should tell that to the sloths. I also tried to make it clear that I was arguing about what happens to “intelligent” species. I guess I should have been clearer that my definition of “intelligent” is self-aware enough to understand molecular biology. Also note that my argument only means that the overall, encoded evolution of the beast is slow. The individual error correcting steps necessarily must be fast. So while our large scale information processing may be slow, our error correcting subroutines would likely take care of those speedy cheetahs!

    Your final point, about relativity, is interesting. Relativity, from the perspective of error correction, is most important for establishing a finite speed of propagation of information. But what we know about error correction and fault-tolerance suggests that this isn’t a fundamental barrier to the error correcting constructions. It does, however, tend to make them less efficient. Which argues in favor of this silly idea being more correct than wrong.

    AlanAlda points out that we will likely try to modify the error processes themselves. Definitely! Some of the ways in which we do this, with technology that becomes in effect part of us, seem like they fall into the spirit of this discussion. It’s harder to think of a technological process that can alter more “fundamental” error rates. The kinds of errors that we see in quantum computing, and even today as our computers reach the single atom scale, come nearly straight up from the laws of quantum electrodynamics. Is it fair of me to think of earthquakes as having a similar derivation?

    Abdullah Khalid brings up a point which others have brought up to me: that we should just download our brains to computers and we will all be fine. Okay, so I have to admit that prior to my new life as a Google engineer I might have agreed with you. But working at Google has shown me the error of my ways :) Disks fail. A lot. Hardware fails. A lot. Building fault tolerant distributed systems, even today, requires overhead that is non-trivial. I see this because I see a shit load of computers, and a large number of machines is statistically equivalent to a long time: if we started running very long timescale life on computers, we would stop believing in the cult of error-free hardware. Eventually we will hit the same sorts of information processing error correcting limits.


  7. quantumbot says:

    It’s yet another thing we always wanted to know about aliens, but were afraid to ask.


  8. Sid says:

    If the individual error-correcting steps are necessarily fast, then shouldn’t those be visible to us as something out of the ordinary and hence potentially showing alien life?


  9. Dave Bacon says:

    Yeah you would think the error correcting steps would start to become visible. Of course since they are essentially shuttling noise that is destructive to places where the noise doesn’t matter, the error correcting aliens may look a lot like noise!

