Moore's Law

Our universe is expanding. Not only that, but this expansion is probably accelerating. Two authors, Lawrence Krauss and Glenn Starkman, have now proposed that a consequence of this acceleration is that only a finite amount of information processing can ever be performed in such a universe: astro-ph/0404510. According to the authors, the total amount of information that can be processed is at most 10^(120) bits. A consequence of this is that Moore’s law can last for at most 600 years in any civilization!
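
As a rough sanity check on that figure (a back-of-the-envelope sketch, not the paper's own derivation; the 1.5-year doubling time and the order-one starting point are assumptions): 10^120 bits is about 2^400, so an exponential that doubles every 18 months or so runs through the whole budget in roughly 600 years.

```python
import math

# Assumed doubling time for Moore's law (~18 months).
doubling_time_years = 1.5

# Total information-processing budget quoted for an accelerating universe.
total_bits = 10**120

# Doublings needed to exhaust the budget, starting from order-one bits.
doublings = math.log2(total_bits)            # ~398.6

years = doublings * doubling_time_years      # ~598 years
print(f"about {doublings:.0f} doublings, i.e. roughly {years:.0f} years of Moore's law")
```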

4 Replies to “Moore's Law”

  1. The method is slightly different from Lloyd’s and Bousso’s. They (Krauss and Starkman) calculate the total energy accessible, penalize it for redshifting when it is sent back to the origin where the computer sits, take the radiation from the horizon to be at a certain temperature, and from that calculate the amount of energy needed to compute at that temperature (a rough sketch of this kind of estimate follows the comment).
    What I’m not quite sure about in this calculation is what converting the rest-mass energy into radiation to send back to the origin does to the cosmology. Basically, I’m arguing that if you are going to “go big” by making claims about the ultimate limits of computation, you’d better make sure that there aren’t loopholes: and a big loophole is that it might be possible to alter the cosmology of the universe. If you can convert dark energy to photons or to mass, or vice versa, you can alter the cosmology.
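
A loose order-of-magnitude reconstruction of the estimate described above (not Krauss and Starkman's actual calculation; the Hubble constant, the accessible mass-energy, and the Landauer-style cost of k_B T ln 2 per bit are all assumed, and the redshift penalty is ignored):

```python
import math

# Rough constants in SI units.
hbar = 1.055e-34   # J s
k_B  = 1.381e-23   # J / K
c    = 3.0e8       # m / s

# Assumed Hubble constant, ~70 km/s/Mpc expressed in 1/s.
H = 2.3e-18

# Temperature of the de Sitter horizon radiation: T = hbar H / (2 pi k_B).
T_horizon = hbar * H / (2 * math.pi * k_B)

# Assumed total accessible mass-energy: ~1e53 kg of matter converted via E = m c^2.
E_total = 1e53 * c**2

# Landauer-style cost per bit operation at the horizon temperature.
energy_per_bit = k_B * T_horizon * math.log(2)

bits = E_total / energy_per_bit
print(f"horizon temperature ~ {T_horizon:.1e} K")
print(f"crude bound ~ {bits:.1e} bits")   # lands within a few orders of magnitude of 10^120
```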

  2. Well, how about forgetting some of the stuff that we already know?
    It is always possible to use lossy classical compression to increase the degree of information storage, but I am not sure what one gains asymptotically; I am sure Shannon has an answer for it.
    Science itself is all about compressing information. For example, instead of tabulating various refraction angles we can use the law of sines for refraction. From a computer science perspective this is basically data compression: assume that nature repeats her own patterns [I’ll come to it], then define those patterns and write down the law governing them. You’ll find it to be very similar to, for example, Huffman coding (a minimal sketch follows this comment)!
    However, the assumption that nature repeats her own patterns rests on two extremely shaky grounds [in my humble opinion]: (a) statistics (related ultimately to probability, and then of course to set theory, and then to logic, and so on) and (b) the human brain as a pattern-matching machine.
    There has been some work on quantum lossy compression [lossy superdense coding, or communication stuff], right?
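
For reference, a minimal self-contained sketch of Huffman coding; the example string and the one-byte-per-symbol baseline are purely illustrative choices, not anything from the comment itself:

```python
import heapq
import itertools
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Build a Huffman code table for the symbols in `text`.

    Frequent symbols get short codewords and rare ones get long codewords,
    which is the "compress by exploiting repeated patterns" idea.
    """
    counter = itertools.count()   # unique tie-breaker so the heap never compares trees
    heap = [(freq, next(counter), sym) for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    if len(heap) == 1:            # degenerate single-symbol input
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(counter), (left, right)))
    codes: dict[str, str] = {}
    def walk(node, prefix: str) -> None:   # assign 0/1 along the tree branches
        if isinstance(node, str):
            codes[node] = prefix
        else:
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

message = "the law of sines for refraction"
codes = huffman_codes(message)
encoded = "".join(codes[ch] for ch in message)
print(f"{len(encoded)} bits with Huffman vs {8 * len(message)} bits at one byte per symbol")
```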

  3. What this means, presumably, is that anything physically computable is in the “complexity class” BQSPACE(10^122) (I use quotes because this is a finite, model-dependent set). Actually that upper bound is too generous, something I’ve been meaning to write a paper on. In any case, it was already known: Seth Lloyd and Raphael Bousso had papers about it, which Andris Ambainis and I discussed in our “Quantum search of spatial regions.” Do Krauss and Starkman add anything new?
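
For what it's worth, 10^122 is roughly what a holographic-style count of the de Sitter horizon area in Planck units gives. A rough illustration, assuming a Hubble constant of about 70 km/s/Mpc and the usual area/(4 l_P^2) entropy formula converted from nats to bits; this is not the calculation from any of the papers mentioned:

```python
import math

c   = 3.0e8       # m / s
l_P = 1.616e-35   # Planck length, m
H   = 2.3e-18     # assumed Hubble constant (~70 km/s/Mpc) in 1/s

R = c / H                      # de Sitter horizon radius, ~1.3e26 m
A = 4 * math.pi * R**2         # horizon area

# Holographic-style entropy A / (4 l_P^2) in nats, converted to bits.
bits = A / (4 * l_P**2) / math.log(2)
print(f"~{bits:.1e} bits")     # ~3e122
```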
