Things fall apart. Normally we think of our computers as deterministic machines which never make a mistake. But with some very small probability, your hardware will make a mistake (information sitting in a storage device is probably the most likely place for this to occur). The point, however, is that for the tasks we use our computers for, writing an email, ordering a product from amazon.com, and so on, the failure rate is so low that it simply never comes into play.
Now suppose that humanity learns to prolong its lifespan to some enormous timescale. Will this change our fundamental concept of what a computer is? When a computer's errors factor into our lives in a real, albeit slow, way, will we still think of computers the way we do today?
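A back-of-envelope sketch makes the contrast vivid. The per-hour error rate below is a made-up illustrative number (real rates vary enormously across hardware), but the shape of the result doesn't depend on it: an error that is absurdly unlikely during a one-hour task becomes a near certainty over a million-year life.

    # Back-of-envelope: how the same tiny error rate looks on two timescales.
    # The rate below is a made-up illustrative number, NOT a measured figure.

    import math

    ERROR_RATE_PER_HOUR = 1e-9  # hypothetical chance of an uncorrected error per hour

    def p_at_least_one_error(hours: float) -> float:
        """Probability of at least one error in `hours`, assuming independence."""
        # 1 - (1 - p)^n, computed stably via log1p/expm1 for tiny p and huge n
        return -math.expm1(hours * math.log1p(-ERROR_RATE_PER_HOUR))

    one_hour = 1.0
    human_lifetime = 80 * 365.25 * 24   # ~80 years, in hours
    long_lifetime = 1e6 * 365.25 * 24   # a million years, in hours

    print(f"one-hour email session: {p_at_least_one_error(one_hour):.2e}")
    print(f"80-year lifetime:       {p_at_least_one_error(human_lifetime):.2e}")
    print(f"million-year lifetime:  {p_at_least_one_error(long_lifetime):.6f}")

With these numbers the one-hour task sees an error with probability about 1e-9, an 80-year life about 7e-4, and the million-year life over 99.98 percent.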
Computers are not invincible. It is not clear to me that their method of achieving fault tolerance is even the best or most effective route to computation. When we build our computers small, so small that errors become inescapable, will we continue to try to maintain the model of the transistor and the nearly deterministic, completely controlled system? Or will we take a cue from biology and maybe find that complex, erring systems can be programmed in ways we haven't thought of yet?
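For concreteness, here is a minimal sketch of one textbook route to fault tolerance, triple modular redundancy with majority voting; the noise model and error rate are invented for illustration, and I'm not claiming any real machine works exactly this way.

    # Triple modular redundancy (TMR): run three copies of a computation on
    # noisy hardware and take a majority vote. A textbook classical
    # fault-tolerance technique, shown here with a toy noise model.

    import random
    from collections import Counter

    NOISE = 0.05  # hypothetical probability that a single run returns a flipped bit

    def noisy_compute(x: int) -> int:
        """A 'computation' (here, the identity on a bit) that sometimes errs."""
        return x ^ 1 if random.random() < NOISE else x

    def tmr(x: int) -> int:
        """Run three independent copies and return the majority answer."""
        votes = [noisy_compute(x) for _ in range(3)]
        return Counter(votes).most_common(1)[0][0]

    # With independent failures, TMR errs only when 2 or more copies err:
    # 3 * NOISE**2 * (1 - NOISE) + NOISE**3 ~= 0.007, versus 0.05 for one copy.
    trials = 100_000
    single_errors = sum(noisy_compute(1) != 1 for _ in range(trials))
    tmr_errors = sum(tmr(1) != 1 for _ in range(trials))
    print(f"single-copy error rate: {single_errors / trials:.4f}")
    print(f"TMR error rate:         {tmr_errors / trials:.4f}")

The vote suppresses errors, but only by brute redundancy: we pay threefold in hardware to pretend the machine is deterministic, which is exactly the model I'm wondering whether we'll keep.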