Where Are the Borg?

(Warning: partially valid arguments ahead, but at some points reality takes a hit and runs for the hills and then returns to make some sort of point.)
What I love about the threshold theorem for computation (classical or quantum) is that it is essentially a theorem about immortality. Whah? Immortality? Indeed. (For those unfamiliar with the ideas of the threshold theorem see a quantum discussion in quant-ph/0507174 by Daniel Gottesman.)
Well, first of all, let me rephrase that: the thresholds theorems of computation are about immortality. We should pluralize “theorem” since there are many different versions of the theorem, applicable under many different assumptions. We should pluralize “threshold” since there are many different threshold parameters corresponding to those different assumptions.
Now, given the assumptions of the thresholds theorems, we can ask whether those assumptions are satisfied in the real world. If they are, then the particular theorem you are concerned with states that it is possible to design a computer whose rate of failure can be made arbitrarily small by building bigger computers out of the faulty components (and this size overhead scales in such a way that lowering the rate of failure by k orders of magnitude only incurs an increase in size polynomial in k.) So, in essence, the theorem states that you can make your computer effectively immortal. Say you want it to live for a billion years: you can build such a device. Say you want it to live for a trillion years: you can build a bigger device. Etc., etc., onward to effective immortality. (Okay, so there are those of you who might object to me calling a computer a living thing and personifying it with the attributes of life and death, but I have too little time to spend arguing against mythical beasts in the machine for which we have no evidence and which somehow make biology an independent branch of the laws of the universe 😉 )
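For the numerically inclined, here is a minimal sketch of the scaling the theorem promises, using the standard concatenated-code picture (the one discussed in Gottesman's review). The physical error rate p, the threshold p_th, and the per-level blow-up factor below are all made-up illustrative numbers, not taken from any particular construction.

```python
# Minimal sketch of the concatenated-code scaling behind the threshold theorem.
# All numbers here (p, p_th, blowup) are hypothetical, purely for illustration.

def logical_error_rate(p, p_th, level):
    """Logical failure rate after `level` rounds of concatenation:
    p_L = p_th * (p / p_th) ** (2 ** level)."""
    return p_th * (p / p_th) ** (2 ** level)

def overhead(level, blowup=15):
    """Physical qubits per logical qubit, assuming each level of
    concatenation costs a constant factor `blowup` in size."""
    return blowup ** level

p, p_th = 1e-4, 1e-3   # assumed physical rate, safely below an assumed threshold
for level in range(5):
    print(level, logical_error_rate(p, p_th, level), overhead(level))

# Driving the failure rate down by k orders of magnitude needs 2**level ~ k,
# i.e. level ~ log2(k), so the size overhead is blowup**log2(k) = k**log2(blowup):
# polynomial in k, which is the overhead quoted in the post.
```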
So, given that the threshold theorems somehow “prove” that we can make immortal machines, the question is obviously whether the universe actually obeys the conditions of one of the thresholds theorems. I would certainly be inclined to believe that the answer to this question is no: there is no thresholds theorem which actually holds in our universe. The threshold is zero. Why do I say this? Well, let’s just think of the most common forms of the quantum threshold theorems. One thing these models don’t consider is a form of error in which the entire quantum computer is blown up by, to put it in a modern context, terrorists (you see, it all makes sense, because quantum computers can be used to crack the codes that these terrorists use to plot their evil deeds. To misquote a famous 19th century author: a useless consistency is the hobgoblin of a creative but bored mind.) Now this form of error can certainly happen. There is certainly a probability that it will happen (at which point we might begin to worry whether it was a Republican or a Democrat who calculated this probability.) And I am equally certain that the current threshold theorems do not apply to this form of error. Thus I can at least argue that today’s theorems do not have assumptions which are satisfied in the real world.
Of course, the lack of a current theorem whose assumptions are satisfied by how the real world works does not imply that there isn’t some thresholds theorem which is satisfied in the real world. So can we put our arguments on more rigorous (snicker) grounds? Well, I would maintain that the absence of the Borg is quiet evidence that there is no threshold theorem for immortality in our universe. Huh? Well, suppose that we try to extend our threshold theorem for quantum computation to the type of errors I described above (so-called “t-errors”). Certainly I can imagine a way to do this (okay, maybe not so realistically!), but only at the cost of designing a larger computer. Indeed, I suspect that there are turtles all the way up, and that if we keep pressing higher into the hierarchy of errors we wish to be robust against, we will always be making larger and larger computers. And certainly, even with our current constructions, to truly obtain immortality we need larger and larger machines. This argument suggests that if there is such a theorem, then to achieve immortality we must construct larger and larger computers whose size eventually must engulf the entire universe (okay, I’m way out on a limb here, but I am currently in California where this is only a little more flaky than your average citizen’s view of the world.) So, since when I look around I do not see such a construction, and I believe that (in this case alien) technology will always expand to fill the void of what is possible (over the edge 😉 ), this implies to me that the threshold for computation in the universe is zero. Of course, I have discounted the possibility that, just because I do not see the construction, the construction does not exist. Indeed, we ourselves may be this construction (quoth Philip K. Dick: “the Empire never ended.”) So, no Borg, no threshold. 🙂
Now, you might think that believing the thresholds for computation are zero might lead me to choose a field other than quantum computation. In fact, you might even go so far as to say that maybe we should trash the classical computer revolution, since, certainly, there are no truly fault-tolerant classical computers by this standard. But of course this, like my argument above, is absurd. The thresholds theorems are meant only to be a step toward establishing the possibility of a technology whose use and capacities are not infinite, but which is designed to achieve as much as is possible given the assumptions of the theorems. The thresholds theorems are never about taking the limit of the theorems by nailing our probability of failure to zero. The thresholds theorems are always about figuring out what you can do with the resources you have. Thus we shouldn’t view the thresholds theorems as a magic potion on the path towards building a quantum computer, but instead as a way to best optimize our construction of a computer.
More importantly for the field of quantum computation, the question of relevance is whether large quantum computers can be built which outperform classical computers. But this always has the context of what classical computer we are talking about. So really the thresholds theorems for quantum computation are more about whether we will be able to build a quantum computer which outperforms a classical computer. Now, because we believe that quantum computers have exponential benefits over classical computers for some tasks, this means that for these tasks, once technology reaches the point where quantum computers outperform classical computers on the relevant task, building better classical computers becomes an exponential waste compared to building a better quantum computer. For me, this is the real threshold for quantum computation: the day we cross over from classical computers to quantum computers which outperform those classical computers. The thresholds theorems are just ways of stating that we don’t see any theoretical obstacles to such a day.
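To make that crossover picture concrete, here is a toy calculation. The cost functions and constants are invented purely for illustration (an assumed exponential classical scaling against an assumed polynomial quantum scaling), not a claim about any real machine.

```python
# Toy crossover calculation: at what problem size does a slow, polynomially
# scaling quantum computer overtake a fast, exponentially scaling classical one?
# The scalings and constants below are assumptions chosen only for illustration.

def classical_cost(n, a=1e-9):
    return a * 2 ** n      # assumed exponential classical cost

def quantum_cost(n, b=1e-3):
    return b * n ** 3      # assumed polynomial quantum cost

n = 1
while classical_cost(n) <= quantum_cost(n):
    n += 1
print("crossover problem size:", n)

# Past this point, each extra increment of problem size makes improving the
# classical machine an exponentially worse investment than improving the
# quantum machine: the "real threshold" described above.
```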

18 Replies to “Where Are the Borg?”

  1. I like the basic argument, though of course “immortal” is limited to the lifetime of the Universe under the best of circumstances (at least, as far as we know).
    _The Year’s Best Science Fiction_, a few years ago (1999?), had several fantastic, dazzling stories on immortality, one of which was by Ursula K. Le Guin, but I’ve forgotten the name of that one and confess I don’t remember the others.
    In the real world, of course, as you hint with your terrorist arguments, data is not immortal. For evidence, see Alexandria, the Spanish destruction of all but four known books written by the Maya, the Bukhara library (1920, see _A Peace to End All Peace_), and more recently the Mariner data tapes, which reportedly are unreadable, and BIND, the Biomolecular Interaction Database, which the Canadian government simply refused to continue funding and may be irretrievably shut down (though it appears to be limping along at the moment). Not to mention the continuing decay of archeological sites, cultures, and human languages worldwide; the loss of such things is final and irreversible in the most tragic sense.
    “Data stewardship” is an important discipline in its own right. The intersection of Big Science, supercomputing, and digital libraries often results in talk about this topic; see Reagan Moore (SDSC) or Jack Cole (ARL).
    I should blog this myself…

  2. I had a conversation with Oscar Boykin last year in which he asserted the opposite: that the finite number of particles in the universe limits any fault-tolerant scheme from maintaining error-free information all the way to the infinite time horizon.

  3. I think Aram may be suggesting the right metaphor: can a “t-error” stamp out life in the universe? This discussion reminds me of Dyson’s analysis of immortality in an open universe that expands forever [Rev. Mod. Phys. 51, 447 (1979)]. He concluded that life can continue forever — it has to slow down as the universe cools, but the subjective time that can be experienced is unbounded. It seems that the conclusion would be more pessimistic, though, if the universe has a positive cosmological constant and approaches a nonzero limiting (Gibbons-Hawking) temperature.

  4. Life on Earth seems to have solved the fault-tolerance problem pretty well; even if we cause some global crisis, the planet will bounce back much faster than it took for life to evolve in the first place.
    And FT computing isn’t about preserving static data forever, it’s about moving some computation forward in the presence of noise; likewise for evolution. In fact, evolution is the optimal solution to a much harder problem than computers (or the Borg) try to solve. I think the reason we don’t see the Borg is that (light-cone arguments aside), survival and self-replication are not things that lend themselves well to being simultaneously optimized on different scales (of time/space/species).

  5. Anonymous: I like the argument, but even if there is a finite number of particles, this doesn’t mean that the Hilbert space is finite, right? Of course one could make arguments along the lines of those made by Seth Lloyd and others about the size of the Hilbert space describing our universe. Either way, either limit (time or space) seems to me to be a little, well, fun to think about, but maybe not so relevant to the real world 😉

  6. From the emerging engineering viewpoint on quantum physics, pretty much all the quantum systems that physicists discuss (including quantum computers) are dynamically compressible. That is, they exist within a Hilbert space that is exponentially large, such that generic states cannot even be written explicitly (e.g., the general state of a 40 qubit computer), but fortunately, a basis set can be found within which the Hamiltonian (2^80 numbers) is so sparse that it can be written in a compressed format.
    If you think about it, the Schroedinger equation, for example, is just such a compressed representation of a Hamiltonian.
    A key question for practical quantum computation then is, are the Kraus generators associated with physical noise mechanisms similarly compressible? In a 40 qubit computer, there are ~2^80 such independent Kraus noise generators (a back-of-the-envelope version of this count appears after this comment). Since no known quantum error correction scheme can handle more than an exponentially small fraction of them, most of these noise mechanisms must be zero (or exponentially near zero) for quantum computing to be practical.
    Engineers become very apprehensive when device designs require that a set of ~2^80 noise coefficients all be near-zero, when there is no apparent physical law that requires this zeroing. E.g., in all the published device designs that I have seen, it is dismayingly easy to identify physical noise mechanisms that couple large numbers of qubits.
    Classical computers robustly correct these errors by coupling each bit to a thermal bath — this design solution is not open to quantum computers. Are we engineers missing something here?
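As an aside on the counting in the comment above, here is a quick back-of-the-envelope sketch of where a figure like ~2^80 comes from, counting the non-identity n-qubit Pauli operators as one convenient basis for possible noise terms. This is an illustrative count only, not a calculation taken from the comment.

```python
# Back-of-the-envelope count of independent error generators on n qubits,
# taking the non-identity n-qubit Pauli operators as a basis for noise terms.
# This is an illustrative count, not a statement about any particular device.

def pauli_error_count(n):
    """Number of non-identity n-qubit Pauli operators: 4**n - 1."""
    return 4 ** n - 1

for n in (1, 2, 10, 40):
    print(n, pauli_error_count(n))

# n = 40 gives 4**40 - 1, which is about 1.2e24, i.e. roughly 2**80:
# the order of magnitude quoted in the comment above.
```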

  7. What about if the quantum computer is causally disconnected from the rest of the universe? Surely then the t-errors and all other external sources of error are completely irrelevant, and you can work out a rigorous threshold theorem.

  8. Classical (probabilistic) computation has exactly the same counting arguments as quantum computation, except for a factor of two.

    Hmmmm … that’s not my experience when I code spin simulations! Clearly, classical simulations have a state space of dimension 2·n_spin, while quantum simulations have a state space of dimension 2^n_spin.
    Because the quantum state space is exponentially larger than the classical state space, the space of dynamical couplings and noise mechanisms is exponentially larger too.
    It’s true that almost all of these new dynamical couplings and noise mechanisms are non-classical. Doesn’t that suggest, though, that an early—and very important—physics application of quantum computers will be to measure these never-before-accessible high-order quantum noise mechanisms?

  9. Classical deterministic simulations have dimension the number of spins. Classical probabilistic simulations have dimension 2^(number of spins). Keeping track of probabilities has the same difficulties as keeping track of amplitudes.

    Hmmmm, so if I take my existing “unravelling” simulations of quantum state evolution (which numerically track the 2^n variables of each individual state trajectory), and I ambitiously decide to evaluate the set of all such trajectories, in order to globally characterize the flow of the system’s “quantum fluid”, then doesn’t the new all-trajectory quantum simulation require roughly 2^n times more variables than an otherwise similar all-trajectory classical simulation?
    We can choose to track individual trajectories (known as “particle trajectories” or “state unravellings”, respectively), or we can try for a (much harder) global description in which we evaluate *all* trajectories. Either way, the quantum description seems harder, purely for dimensional reasons.
    I’m just suggesting that quantum mechanics is a lot like fluid mechanics, except that the state space is much bigger (and has some additional invariances, e.g., Choi’s theorem). I mean, there must be *something* that’s harder about quantum mechanics, right?

  10. I would argue with your (Sidles) characterization of classical computers. Classical (probabilistic) computation has exactly the same counting arguments as quantum computation, except for a factor of two. Further, I do believe that coupling bits to a thermal bath is allowed, indeed necessary, for quantum computers. This, in fact, is exactly how quantum error correction works (dumping entropy into the surroundings.)

  11. What about if the quantum computer is causally disconnected from the rest of the universe? Surely then the t-errors and all other external sources of error are completely irrelevant, and you can work out a rigorous threshold theorem.
    Indeed, but I have yet to discover a method for removing an experiment from the rest of the universe; the only causally disconnected section of the universe is the whole universe itself.

  12. I don’t think the “t-errors” are likely to be a limiting factor: by separating the components of the computer sufficiently widely, you should be able to avoid that sort of catastrophic error. Also, Borg absence does not tell you very much, since the size scaling of a fault-tolerant computer need only be polylogarithmic in the length of time it must run. Perhaps the universe is simply not old enough for us to have encountered the Borg yet.
    However, there is another assumption of the threshold theorems that is not satisfied in the actual universe: We have no on-line source of clean bits to dump entropy into. The universe is gradually evolving towards a state of maximal entropy, and no amount of fault-tolerance can reverse that overall trend. At least, it can’t if physics is unitary. Perhaps we could eliminate entropy using white hole anti-evaporation – the time reverse of black hole evaporation, in which we push together thermal radiation and get a white hole which spits out a variety of pure states.

  13. Daniel’s comment reminds me of the following line of speculation:
    1. Hawking believes (believed) that quantum theory needed to be modified due to the black hole information paradox.
    2. Modifications of quantum theory almost inevitably make quantum theory computationally stronger.
    3. Thus it must be possible to solve computationally hard problems by throwing parts of your computer into a black hole.
    Yeah, the logic is in shambles, but this shouldn’t stop a science fiction writer from using the above mechanism in their novel 🙂

  14. Clearly, classical simulations have a state space of dimension 2·n_spin, while quantum simulations have a state space of dimension 2^n_spin.
    Classical deterministic simulations have dimension the number of spins. Classical probabilistic simulations have dimension 2^(number of spins). Keeping track of probabilities has the same difficulties as keeping track of amplitudes.

  15. So, since when I look around I do not see such a construction, and I believe that (in this case alien) technology will always expand to fill the void of what is possible (over the edge), this implies to me that the threshold for computation in the universe is zero.
    I’m thinking this Borg stuff is mostly in jest, but still, isn’t this just a new spin on the Fermi paradox? Even if it isn’t possible to build an “immortal” computer, you’d think a sufficiently advanced alien civilization would still want to convert large amounts of matter into computers just to be able to create giant simulations or superintelligent Jupiter-brains or whatever, or to do something else that would leave visible evidence of their presence, like converting stars to Dyson spheres or sending out self-replicating von Neumann probes to colonize every solar system in the galaxy. Personally I take this argument seriously enough that I’d bet a lot that the Rare Earth argument is basically correct and no intelligent life has evolved anywhere within our past light cone…

  16. Sure, when you simulate a classical probabilistic system using a probabilistic algorithm (as you do in “trajectory” simulations), there is no counting problem. Similarly, if we simulate a quantum system using a quantum algorithm, then there is no counting problem!
    But if you look at the space used to describe a probabilistic classical system (that’s 2^n − 1 independent probabilities) versus a pure quantum state (that’s 2^n complex amplitudes, or 2(2^n − 1) real parameters once normalization and the global phase are removed), then there’s not much of a difference: just that factor of two.
    But back to the main argument which you give, which has to do with noise models. How do we describe noisy classical systems? Well, the general description of a classical n-bit system has 2^n probabilities, and a general description of the evolution is a stochastic 2^n by 2^n matrix. So even in the classical world you need to worry about the values of an exponential number of these coefficients. Now, you don’t need to worry about them being zero, but you do need to understand how they vary across your system. One would hope, for both quantum and classical, that the noise does not exhibit correlations which will destroy computation. But note that this applies to both classical AND quantum computing. Do you really know that there aren’t memory effects which ultimately limit classical computation? Most definitely not.
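Here is a small illustration of that classical counting. The construction below (independent bit flips tensored up to n bits) is an invented toy example of a 2^n-by-2^n stochastic evolution, chosen purely for illustration, not anything from the thread.

```python
# Toy illustration: a noisy classical n-bit system is described by 2**n
# probabilities evolving under a 2**n-by-2**n stochastic matrix. Here the
# noise is assumed to be independent bit flips, purely for illustration.

import numpy as np

def bitflip_channel(n, p):
    """Stochastic matrix for independent probability-p flips on n bits."""
    single = np.array([[1 - p, p],
                       [p, 1 - p]])   # 2x2 stochastic matrix for one bit
    full = np.array([[1.0]])
    for _ in range(n):
        full = np.kron(full, single)  # tensor up to a 2**n x 2**n matrix
    return full

n, p = 3, 0.01
T = bitflip_channel(n, p)
dist = np.zeros(2 ** n)
dist[0] = 1.0          # start with all bits set to 0
print(T.shape)         # (8, 8): the description is exponential already classically
print(T @ dist)        # probability distribution after one noisy step
```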

  17. This debate is in a very old place — looking at finite sets of things that can supposedly make up a universe, and asking what it is like to be human in such a universe. I guess that the first person to try this kind of reasoning was Friedrich Nietzsche, in his last days, trying to give proofs of the eternal return (http://www.math.umd.edu/~lvrmr/History/Recurrence.html) that were subsequently generalized. The problem is that in relativized quantum theories, there is “no place for particles” (see http://philsci-archive.pitt.edu/archive/00000195/, one of the best 10 papers in philosophy of that year, so say some), so the arguments don’t really go through. I confess I understand the simpler threshold theorems, but the field stuff that Kitaev, Freedman, and so forth talk about is a bit beyond me. I suspect, though, that what they are saying is ontologically compatible with the axiomatic presentations of field theory.
