Blabbering Bacon

For those local to Seattle: Next week, on Tuesday, I’m giving a talk here at UW:

Towards Robust and Powerful Quantum Computers
Colloquium
Tuesday, April 18, 2006
3:30 pm, EE-105
Abstract
Today, a massive effort, spanning many hundreds of research groups across multiple disparate disciplines, is underway to build a robust large-scale quantum computer. These groups have taken up this task because the payoffs for building a quantum computer are large, and because a remarkable set of theoretical insights, collectively known as the theory of fault-tolerant quantum computation, assures them that building a robust quantum computer is possible. In this talk I will discuss my research into the theory of fault-tolerant quantum computation as well as into the algorithmic power of quantum computers. On the first topic, I will highlight methods for achieving fault-tolerant quantum computation that are remarkably similar to how fault tolerance is achieved in classical computers. Such “self-correcting” quantum systems are best thought of as the quantum equivalent of the classical transistor, which jump-started the classical computer revolution. On the second topic, quantum algorithms, I will highlight the similarities and differences between quantum and classical computers and describe how these differences have led to new quantum algorithms for classically hard problems.

4 Replies to “Blabbering Bacon”

  1. Folks from our UW Quantum System Engineering Group will be there, Dave, for sure!
    We are very interested in noise mechanisms, e.g., thermal dielectric and magnetic noise. Also, we are very interested in the emerging “geometric” point of view on quantum computing that Mike Nielsen has pointed out.
    We understand this emerging point of view as follows, no doubt imperfectly …
    We begin by remarking that many (or even most) engineering simulation problems are known to be NP-hard.
    http://web.mit.edu/jnt/www/complex.html
    Suppose we attempt to answer these same NP-hard questions by a quantum simulation, and we try to identify (as suggested in a Caltech summer school lecture by Mike Nielsen) a geometric obstruction to such simulations.
    E.g., we might hope to show that there is not enough total volume in the quantum phase space to hold the necessary foliation of some (carefully chosen) NP-hard calculation.
    An unusual feature of this P vs. NP proof strategy is that the specified quantum error correction algorithms can be quite brutal: sufficient to extinguish any expectation of the quantum simulation having better efficacy than a classical simulation.
    But that’s OK, since the point is to find an NP calculation that is provably harder than P. Therefore, installing an over-aggressive, correlation-extinguishing quantum error correction scheme actually helps with this program! Similarly, it is perfectly OK, even a good idea, to decline to import unlimited quantities of ancilla bits, since they enlarge a state space that our proof strategy attempts to restrict.
    It was Niels Bohr who said, “The opposite of a great truth is another great truth.” Nielsen’s strategy implicitly suggests that a similar saying holds for engineering, even quantum engineering: “The opposite of a great engineering goal is another great engineering goal.”

  2. “I don’t think that quantum error correction “extinguishes” correlations. What makes you think that it does?”
    Hmmmm, what I said wasn’t very well phrased. Even a classical computer has error correction built in, either explicitly via error-correction software (nowadays less common than it used to be) or implicitly by virtue of coupling a thermal reservoir (like a resistor) to the individual bits. These thermodynamically irreversible mechanisms ensure that once a bit is set, it stays set; this is the lowest-order kind of (continuous) error correction. However, these same reservoir-based correction mechanisms act to extinguish higher-order quantum correlations; this is what I meant by “over-aggressive” error correction.
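    The “reservoir as continuous error correction” picture above can be sketched numerically. The following toy model is my own illustration, not anything from the comment or the talk: overdamped Langevin motion in a double-well potential, where the deterministic drift toward the nearest well plays the role of the dissipative reservoir that keeps a set bit set.

```python
import random

def relax_bit(x0, steps=2000, dt=0.01, noise=0.15, seed=1):
    """Overdamped motion in the double-well potential V(x) = (x^2 - 1)^2.

    The wells at x = +1 and x = -1 represent the two bit values.  The
    drift term -V'(x) continuously pushes the state back toward the
    nearest well; this is the lowest-order, continuous error correction
    a thermal reservoir provides for free.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        drift = -4.0 * x * (x * x - 1.0)                 # -dV/dx
        x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
    return x

# A bit set to "1" (x = +1) stays near +1 despite the thermal kicks,
# because the barrier height (1) is large compared to the noise energy.
print(round(relax_bit(1.0)))
```

    With this noise level the Kramers escape rate over the barrier is astronomically small, so the bit is effectively stable; crank `noise` up and the same reservoir that stabilizes the bit starts randomizing it.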

  3. “Therefore, installing an over-aggressive, correlation-extinguishing quantum error correction scheme actually helps with this program!” I don’t think that quantum error correction “extinguishes” correlations. What makes you think that it does?

  4. Hi Dave — that was a great talk! And a very large and lively crowd too!
    I waited until the students were gone to talk with you, but when I came back, the camera crew told me that you and Mark had already departed. I figured you had gone for a latte, but this being Seattle, the question was which cafe? The “Reboot”? The “H-Bar”? The “Allegro”? The “Burke Museum”? “Tully’s”? “Starbucks”? “The HUB”? A person could roam forever in these local cafes.
    So here’s what I was going to say. As you spoke, I translated your talk into our quantum system engineering paradigm, in which every quantum system is an open quantum system -> every open quantum system is a problem in simulation -> every quantum simulation is a problem in model order reduction -> model order reduction is always achieved by Choi-tuning the Kraus operators such that all processes are synoptic measurement processes.
    This on-the-fly translation algorithm converted your ingenious thermodynamic error correction scheme into an equivalent cryptographic scheme, equally ingenious, which seeks to hide (perfectly) the quantum degeneracy of the ground state from covert observation by “Ms. Reservoir”.
    This suggests the following hypothesis. As Ms. Reservoir observes your system more and more strongly (equivalent to raising its temperature), the qubit ensemble eventually undergoes a phase change, for reasons you described very clearly. We expect, therefore, that the information that Ms. Reservoir collects will reflect this phase change.
    What happens near the critical point? Well, near a critical point order parameters have critical exponents, and one of the main glories of statistical mechanics is that these critical exponents can be calculated exactly, via renormalization group arguments.
    So the question is, can we define informatic order parameters associated with the information collected by Ms. Reservoir, and if so, what are the critical exponents of these order parameters?
    Of course, such order parameters are already known in, e.g., systems like 3-SAT. But as far as I know (I am not an expert), no one has calculated critical exponents for these order parameters. Your work suggests that, with the help of a sufficiently ingenious mapping between thermodynamic reservoirs and covert observation, and with some luck, it might be possible to show that the condensed-matter physicists have already calculated some of these exponents.
    That’s all … as you can see, a very thought-provoking talk!
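The Kraus-operator machinery invoked in the “every open quantum system” chain above can be made concrete. The sketch below uses the textbook amplitude-damping channel as a stand-in; the “Choi-tuning” step is the commenter’s program rather than a specific published algorithm, so nothing here implements it. It only shows the raw open-system step: a qubit leaking energy to a reservoir via Kraus operators.

```python
import numpy as np

# Amplitude-damping channel: a standard Kraus-operator model of a qubit
# losing its excitation to a reservoir with probability gamma.
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

def apply_channel(rho, kraus):
    """One open-system step: rho -> sum_k K_k rho K_k^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Completeness, sum_k K_k^dagger K_k = I, guarantees trace preservation.
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

rho = np.array([[0.0, 0.0], [0.0, 1.0]])   # qubit starts in the excited state
rho_out = apply_channel(rho, [K0, K1])
print(round(rho_out[0, 0].real, 6))        # ground-state population: 0.3
```

Iterating `apply_channel` is the simplest model-order-reduction target: the reservoir steadily erases off-diagonal (coherence) terms, which is exactly the “correlation-extinguishing” behavior discussed in the earlier replies.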
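The 3-SAT order parameter mentioned above (the probability that a random formula is satisfiable, as a function of the clause-to-variable density) can be estimated by brute force at small sizes. This is a toy experiment of my own, not the critical-exponent calculation the comment calls for; all function names are mine.

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    """A random 3-SAT formula: each clause picks 3 distinct variables,
    each negated with probability 1/2."""
    formula = []
    for _ in range(n_clauses):
        variables = rng.sample(range(n_vars), 3)
        formula.append([(v, rng.random() < 0.5) for v in variables])
    return formula

def satisfiable(formula, n_vars):
    """Brute force over all 2^n assignments (fine for small n_vars)."""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[v] != negated for v, negated in clause)
               for clause in formula):
            return True
    return False

def p_sat(alpha, n_vars=10, trials=40, seed=0):
    """Estimate Pr[satisfiable] at clause density alpha = m/n."""
    rng = random.Random(seed)
    m = int(alpha * n_vars)
    hits = sum(satisfiable(random_3sat(n_vars, m, rng), n_vars)
               for _ in range(trials))
    return hits / trials

# The satisfiability probability is the order parameter: it drops sharply
# near the critical density alpha ~ 4.27 (in the large-n limit).
print(p_sat(2.0), p_sat(7.0))
```

At these tiny sizes the transition is badly smeared by finite-size effects, which is precisely why extracting genuine critical exponents takes the heavier statistical-mechanics machinery the comment gestures at.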
