Self-Correcting Quantum Computers, Part I

Quantum computing is harebrained, but then again so is classical probabilistic computing. Part I of my attempt to explain one of my main research interests in quantum computing: “self-correcting quantum computers.”

Quantum Computing, a Harebrained Idea?

Quantum computing, at first sight, sounds like a harebrained idea with absolutely no possibility of actually working in the real world. The reasons for this are plentiful, at least when you first start learning about quantum computers. Quantum states (aka wave functions) are described by a continuum of values. Uh oh, that sounds a lot like analog computing, a model of computation which, if you really allow infinite precision, will give you nearly unlimited power. But in the real world analog computers do not gain this power, exactly because real-world experiments can only access finite precision. Quantum gates are described by unitary matrices. Again these have a continuum of values. Quantum states “collapse” when you look at them. How do you quantum compute if simply looking at your computer destroys its computation?
Now each of these objections is an objection that was raised back in the days of yore, when Richard Feynman and others first proposed the idea of a quantum computer, and when most physicists and computer scientists didn’t even know enough about quantum computing to shun it without learning it. But these objections, when you dig down and analyze them, aren’t much different from objections you could raise against a different type of machine: a classical probabilistic machine.

Setup: a Common Category Error

A system can be (analog or digital) and (deterministic or probabilistic).

Let me explain. First of all it is useful to define a few things. In particular it is useful to define digital and analog, or at least to define what I mean by these words. To me a digital system is a system whose configurations form a finite set. In other words, a digital system has configurations which you might label as zero, one, two, and so on, up to some final number. Now of course, most digital systems are abstractions. For example, the information represented by voltages in your computer is digital in the sense that you define a certain voltage range to be a zero and another range to be a one. Thus even though the underlying system may not be digital, from the perspective of how you use the computer, treating it as digital is a good approximation. Analog systems, as opposed to digital systems, are the ones whose configurations aren’t drawn from a finite set. Thus, for example, in classical physics, an analog system might be the location of a bug on a line segment. To properly describe where the bug is we need the real numbers, an infinite set.
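To make the digital-as-abstraction point concrete, here is a minimal Python sketch; the specific voltage thresholds are hypothetical numbers chosen purely for illustration, not values from any real logic family.

    # A hypothetical digitization rule: the underlying voltage is analog (a real
    # number), but we only ever read it out as one of two configurations.
    def digitize(voltage):
        if voltage < 0.8:   # any voltage in this range counts as a logical 0
            return 0
        if voltage > 2.0:   # any voltage in this range counts as a logical 1
            return 1
        raise ValueError("voltage in the forbidden zone: not a valid digital level")

    print(digitize(0.2), digitize(3.1))  # -> 0 1

The analog details of the voltage are still there underneath; the digital description is just the coarse-grained way we choose to use the system.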
Okay, so digital and analog are words which we use to describe the configurations of a physical system (physicists would call these degrees of freedom). But a physical system, whether it is digital or analog, often also has uncertainty associated with it. Thus it is useful to also make a distinction between systems which are deterministic and those which are probabilistic. Now technically these two ideas refer to how a system’s configurations change with time, but they also influence how we describe the system at a given time. A deterministic system is one in which the configuration at some time is completely determined by the configuration at some previous time. A probabilistic system is one in which the configurations change and, either because we are ignorant of some information or for some more fundamental reason, these changes are not known with certainty. For systems which evolve deterministically, we can write down the configuration of the system with certainty. For systems which evolve probabilistically, we aren’t allowed to describe our system in such precise terms, but must instead assign probabilities to the particular configurations.
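To make the deterministic/probabilistic distinction concrete, here is a minimal Python sketch (my own illustration, not taken from the figures): a deterministic bit is updated to a definite new configuration, while a probabilistic bit is described by a probability vector that a stochastic update transforms.

    import random

    # Deterministic bit: the configuration is known exactly, and an update
    # rule (here, NOT) takes it to a definite new configuration.
    bit = 0
    bit = 1 - bit  # after the update we still know the configuration exactly

    # Probabilistic bit: we describe it by probabilities over configurations.
    # p[i] is the probability of being in configuration i; the entries sum to 1.
    p = [0.5, 0.5]

    # A probabilistic update: flip the bit with probability 0.9, leave it alone
    # with probability 0.1. This is a stochastic matrix acting on the vector p.
    flip = 0.9
    p = [(1 - flip) * p[0] + flip * p[1],
         flip * p[0] + (1 - flip) * p[1]]

    # Looking at the bit (sampling) yields a single definite configuration.
    outcome = 0 if random.random() < p[0] else 1
    print(p, outcome)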

Deterministic bits, probabilistic bits, and quantum bits: a cheat sheet.

Now why am I being so pedantic in describing these words? Well, because first of all there is often confusion between analog systems and probabilistic systems. The first refers to degrees of freedom or configurations. The second refers to how a system changes in time, or what we can predict about it once we make a measurement on it. This category error has caused great confusion.
So what does this have to do with quantum computers? Well, it turns out that the set made of deterministic and probabilistic really has another member, and this member is quantum. Quantum theory is the language which describes how systems evolve in time in our universe. Thus we are led, in our most fundamental physical theories, to descriptions of systems which are quantum mechanical. These systems can be analog or digital, but they are described in a quantum fashion. In particular, for probabilistic systems we describe our configuration by a set of nonnegative real numbers which sum to unity, while for quantum systems these numbers are replaced by amplitudes: complex numbers whose absolute values squared sum to unity. A quantum system is as simple as that: a system whose description is given by amplitudes and not probabilities.
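Here is a minimal Python sketch of that last sentence (again my own illustration): a probabilistic bit is a vector of nonnegative reals summing to one, while a quantum bit is a vector of amplitudes whose squared magnitudes sum to one, and those amplitudes can cancel under a reversible update in a way nonnegative probabilities never can on their own.

    import math

    # Probabilistic bit: nonnegative real numbers that sum to unity.
    p = [0.25, 0.75]
    assert abs(sum(p) - 1.0) < 1e-12

    # Quantum bit: amplitudes, complex numbers whose absolute values squared
    # sum to unity. (Here they happen to be real; in general they are complex.)
    amp = [1 / math.sqrt(2), 1 / math.sqrt(2)]
    assert abs(sum(abs(a) ** 2 for a in amp) - 1.0) < 1e-12

    # Measured in the {0, 1} basis, outcome i occurs with probability |amp[i]|^2,
    # so this state looks just like a fair coin flip...
    print([abs(a) ** 2 for a in amp])  # roughly [0.5, 0.5]

    # ...but amplitudes can interfere: a Hadamard-style reversible rotation
    # makes the two contributions to outcome 1 cancel exactly.
    h = 1 / math.sqrt(2)
    rotated = [h * (amp[0] + amp[1]), h * (amp[0] - amp[1])]
    print([abs(a) ** 2 for a in rotated])  # roughly [1.0, 0.0]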

Next Time…

Part II: Why is classical computation possible at all?

5 Replies to “Self-Correcting Quantum Computers, Part I”

  1. Very nice, Dave. The figures are really helpful. I have a couple of suggestions/corrections:
    Figure 1: Change to “deterministic or probabilistic” in the caption. Also, it would be clearer to switch the label to “Deterministic / Probabilistic” to match up with the directed graphs below.
    Figure 2: In my mind, “percent” means divide by 100. I would suggest using something like “likelihood” instead.

  2. Pedantry time. “Hare”-brained has the oldest attestation (see OED) as opposed to “hair”-brain. I also prefer the former, but that’s purely aesthetics…

  3. Concerning your final claim that “A quantum system is as simple as that: a system whose description is given by amplitudes and not probabilities.” The mutual noncommutativity of measurement operators (aka the algebraic structure of the algebra of observables), at least, should also be considered. A Hilbert space representation can be given, in particular, for a (classical) commutative algebra of observables. Furthermore, you do not mention the effects of quantum fluctuations on measurements, which may be significant [and btw can be modeled classically, although a certain sophistication is needed to ensure they are explicitly distinguished from thermal fluctuations — which have a different effect on measurement results].
