Self-Correction

Sometimes you write a paper, think it’s all ready for submission, and then, after you post it to the arXiv, you find that it is lacking in quite a few ways. On Friday I posted the paper quant-ph/0506023 (and did the new paper dance!) But after communications from Michael Nielsen and David Poulin, I realized that I had made a mistake in one of my claims (the proof I had did not work) and that I had very much misrepresented what is new in this paper (in particular in relation to quant-ph/0504189 and quant-ph/0412076). Luckily the mistake in my proof was not a big deal for the paper, and also luckily one can correct one’s foolishness and clarify what’s new and interesting in the paper. Here is the updated title and abstract:
Operator Quantum Error Correcting Subsystems for Self-Correcting Quantum Memories
Authors: Dave Bacon
Comments: 17 pages, 3 figures, title change, rewrite of connection to operator quantum error correction, references added

The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error correcting codes, one requires the ability to apply a procedure which exactly reverses on the error correcting subspace any correctable error. In contrast, for operator error correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform correction only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error correction procedures.
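For readers who haven’t seen the subsystem picture before, here is a rough schematic of what “encoding into a subsystem” means (this is my own summary in the standard operator quantum error correction notation, not text from the paper). One splits the Hilbert space as
[tex]$\mathcal{H} = (\mathcal{H}_A \otimes \mathcal{H}_B) \oplus \mathcal{H}_C$[/tex]
and stores the logical information in the [tex]$A$[/tex] factor alone. An error [tex]$E$[/tex] then counts as correctable if there is a recovery operation [tex]$\mathcal{R}$[/tex] with
[tex]${\rm Tr}_B\left[\mathcal{R}(E(\rho_A \otimes \rho_B))\right] = \rho_A$[/tex]
for all [tex]$\rho_A$[/tex] and [tex]$\rho_B$[/tex]: the recovery only has to restore the [tex]$A$[/tex] subsystem and may leave the [tex]$B$[/tex] (gauge) factor in whatever state it likes. That is the sense in which correction is performed “modulo the subsystem structure.”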

Self Promotion of Self-Correcting Paper

Everybody do the new paper dance, quant-ph/0506023
Quantum Error Correcting Subsystems and Self-Correcting Quantum Memories
Authors: D. Bacon
Comments: 16 pages

The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. In this paper we use this fact to define subsystems with quantum error correcting capabilities. In standard quantum error correcting codes, one requires the ability to apply a procedure which exactly reverses on the error correcting subspace any correctable error. In contrast, for quantum error correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform correction only modulo the subsystem structure. Here we present two examples of quantum error correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error correction procedures.

APS Topical Group

The American Physical Society now has a new topical group, Quantum Information, Concepts, and Computation (GQI). Here is an article about the topical group. Some quotes:

“It is meant to be a broadly inclusive home for researchers whose professional lives may have kicked off in various traditional disciplines, but who nonetheless share an over-arching interest in the foundations and ‘surprising implications’ of quantum mechanics,” said Caltech’s Hideo Mabuchi, GQI’s acting chair.

and

Greenberger also feels the field needs an effective lobbying group to represent its interests to federal funding agencies, most notably the National Science Foundation. “Many young people are becoming interested in the field, but there are few opportunities for having their research funded,” he said.
Part of the problem is that quantum theory suffers from the perception that it is a field for “old men,” since the debate dates back to 1935 and the famous Einstein-Podolsky-Rosen paradox. (That paper is still the most downloaded publication from the APS journal archives, 70 years after it was written.) But Greenberger points out that it is, in fact, a vibrant exciting field at the forefront of physics, using all the latest laboratory techniques, and spinning off the newly emerging fields of quantum cryptography and quantum computing.

Pigeons, Discrete Log, and Entropy

The pigeon hole principle states that if you put m pigeons into n<m holes, then there are at least two pigeons in some hole. Consider, for example, the following problem:
Given a list [tex]$a_1,\dots,a_n$[/tex] of residues mod [tex]$p$[/tex], and suppose [tex]$n>\log_2 p$[/tex]; find two distinct subsets [tex]$S_1,S_2 \subset \{1,2,\dots,n\}$[/tex] so that [tex]$\prod_{i \in S_1} a_i =\prod_{j \in S_2}a_j~{\rm mod}~p$[/tex].
Here we have said “find,” but how do we even know that two subsets solving this problem exist? The pigeon hole principle! The number of possible subsets of [tex]$\{1,2,\dots,n\}$[/tex] is [tex]$2^n$[/tex] (because each element either is or is not in the subset, so a subset corresponds to a binary string of length [tex]$n$[/tex]). But [tex]$2^n>p$[/tex], so we have more pigeons (subsets) than holes (the residues [tex]$0$[/tex] through [tex]$p-1$[/tex] that the products can take). Thus by the pigeon hole principle there must be two distinct subsets (two pigeons) whose products are the same residue.
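To make the pigeonhole argument completely concrete, here is a small brute-force sketch (my own illustrative code, not from the original post; it takes exponential time, so it is only for toy instances):

```python
from itertools import combinations
from math import prod

def find_collision(a, p):
    """Find two distinct subsets of indices of a with equal product mod p.

    By the pigeonhole principle this must succeed whenever 2**len(a) > p:
    there are more subsets (pigeons) than residues mod p (holes).
    """
    seen = {}  # residue -> the first subset of indices that produced it
    n = len(a)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            residue = prod(a[i] for i in subset) % p
            if residue in seen:
                return seen[residue], subset
            seen[residue] = subset
    return None  # unreachable when 2**n > p

# Example: p = 7 and n = 4 > log2(7), so a collision is guaranteed.
print(find_collision([2, 3, 5, 6], 7))  # e.g. ((3,), (0, 1)): 6 = 2*3 mod 7
```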
In computational complexity the above problem belongs to a class of search problems called PPP, which, according to the complexity zoo, stands for “Polynomial Pigeonhole Principle.” These are problems where we are searching for a solution to an instance of a problem, where a candidate solution can be verified to be a solution in polynomial time, and where the existence of a solution is guaranteed by a pigeon hole argument. The second of these should be familiar from the class NP, and the first is what you’re doing when you go beyond decision problems and actually want to find a solution.
PPP is interesting for various reasons, but one of the reasons I know of it is that it is at least as hard as the discrete log problem. In the discrete log problem one is given two elements [tex]$a,b \in {\mathbb Z}_p$[/tex] and must find the smallest [tex]$s$[/tex] such that [tex]$a^s=b~{\rm mod}~p$[/tex]. One of the neat things about a quantum computer is that it can efficiently solve the discrete log problem, whereas there is no known efficient classical algorithm.
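For concreteness, here is a naive brute-force version of discrete log (again my own sketch, not from the post): it runs in time roughly p, which is exponential in the number of bits of p, and the whole point is that no classical algorithm polynomial in log p is known, while Shor-style period finding solves the problem efficiently on a quantum computer.

```python
def discrete_log(a, b, p):
    """Return the smallest s >= 0 with a**s == b (mod p), or None if none exists.

    Brute force over exponents: time O(p), i.e. exponential in the bit length of p.
    """
    x = 1
    for s in range(p):
        if x == b % p:
            return s
        x = (x * a) % p
    return None

print(discrete_log(3, 13, 17))  # 3**4 = 81 = 13 (mod 17), so this prints 4
```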
So what do we know about the power of quantum computers to solve problems in PPP? Well, we know a little. We know that a naive approach to solving these problems will fail. Suppose that we try to solve the above PPP problem by querying the function [tex]$f_{a_1,\dots,a_n}(b_1,\dots,b_n)=a_1^{b_1} a_2^{b_2} \cdots a_n^{b_n}$[/tex], where [tex]$b_i \in {\mathbb Z}_2$[/tex]. Then a result by Scott Aaronson says that any quantum algorithm requires [tex]$\Omega((2^n)^{1/5})$[/tex] queries to this function (later improved to [tex]$\Omega((2^n)^{1/3})$[/tex] by Yaoyun Shi) to solve this problem. So this naive approach to attacking problems in PPP fails. An interesting open question remains, however, whether there is a more sophisticated approach to efficiently solving problems in PPP using a quantum computer.
Interestingly, the problem Scott and Yaoyun work on is also relevant to a physicist in the lab. The problem they consider is called the collision problem. Suppose you have a function [tex]$f$[/tex] from [tex]${\mathbb Z}_N$[/tex] to [tex]${\mathbb Z}_N$[/tex] which is promised to be either 1-to-1 or 2-to-1, and the problem is to distinguish between these two cases. Scott and Yaoyun’s results say that you need to query this function [tex]$\Omega(N^{1/3})$[/tex] times (and, it turns out, there is a quantum algorithm which uses this number of queries) to distinguish between these two cases. Now how does this have relevance to a physicist? Suppose we follow the standard query method and query the function to produce the state [tex]$N^{-1/2} \sum_{x \in {\mathbb Z}_N} |x\rangle|f(x)\rangle$[/tex]. Discarding the second register produces a mixed state of rank [tex]$N$[/tex] if [tex]$f$[/tex] is 1-to-1 but of rank [tex]$N/2$[/tex] if [tex]$f$[/tex] is 2-to-1. The entropy of these states is thus either [tex]$\log_2 N$[/tex] or [tex]$\log_2 N-1$[/tex], respectively. Now suppose you are a physicist in the lab and you want to measure the entropy of a quantum state of which you can prepare multiple copies. You might be interested in how many copies you will need to measure this entropy. Scott and Yaoyun’s results then imply that you will need to perform at least [tex]$\Omega(d^{1/3})$[/tex] such experiments to measure the entropy, where [tex]$d$[/tex] is the dimension of your quantum states. This is bad news if you really want to measure the entropy of a many-qubit system! In a related manner one can think of the period finding routine of Shor’s algorithm as a method for determining the entropy for a special class of states which have a particular promise (that they are periodic).
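Here is a little NumPy sketch of that rank-and-entropy observation (my own illustration, not from the post): prepare the query state, trace out the second register, and the von Neumann entropy of what is left distinguishes 1-to-1 from 2-to-1.

```python
import numpy as np

def query_state_entropy(f, N):
    """Entropy (in bits) of the first register of N^{-1/2} sum_x |x>|f(x)>
    after the second register is discarded."""
    # Joint pure state as an N x N matrix: M[x, y] = <x, y | psi>.
    M = np.zeros((N, N))
    for x in range(N):
        M[x, f(x)] = 1.0 / np.sqrt(N)
    rho = M @ M.T  # reduced density matrix of the first register
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

N = 8
print(query_state_entropy(lambda x: x, N))       # 1-to-1: log2(N) = 3.0
print(query_state_entropy(lambda x: x // 2, N))  # 2-to-1: log2(N) - 1 = 2.0
```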
Discrete log is a pigeon problem, naive quantum algorithms for pigeon problems fail, and pigeon problems put limits on how efficiently we can measure the entropy of a system. It’s topics like these, which spread from computer science all the way to experimental physics, that make me happy to be working in quantum information science.

It's Quantum So Should We Be Surprised?

Over the past few years there have been quite a few results where methods of quantum computing have been used to prove or reprove results about classical objects. Among these are lower bounds on locally decodable codes (de Wolf, Kerenidis, Wehner), limitations on classical algorithms for local search problems (Aaronson), a proof that PP is closed under intersection (Aaronson), and ways to use quantum communication complexity to prove lower bounds on classical circuit depth (Kerenidis). Now we have more to add to this mix, with quant-ph/0505188, “Lower Bounds on Matrix Rigidity via a Quantum Argument” by Ronald de Wolf (CWI Amsterdam). The rigidity of a matrix is the minimum number of entries which need to be changed in order to bring the rank of the matrix down to a given value. What Ronald does in this paper is show how to use a quantum argument to prove lower bounds on the rigidity of the Hadamard matrix. Very nice!
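To pin the definition down (standard notation, stated here for convenience rather than quoted from the paper): the rigidity of a matrix [tex]$A$[/tex] at target rank [tex]$r$[/tex] is
[tex]$R_A(r) = \min\{{\rm wt}(C) : {\rm rank}(A+C) \le r\}$[/tex]
where [tex]${\rm wt}(C)$[/tex] counts the nonzero entries of the change matrix [tex]$C$[/tex]. De Wolf’s paper proves lower bounds on [tex]$R_H(r)$[/tex] when [tex]$H$[/tex] is the Hadamard matrix.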
Whenever I see one of these quantum proofs of classical results, I think: why are these quantum proofs easier, whether significantly or only slightly? But perhaps this thinking is backwards. Because the universe is quantum mechanical, should we really be surprised that the language of this theory is so powerful? One often hears the statement that it is amazing that mathematics is so useful for describing the physical world. But really this disregards the fact that different parts of mathematics are and are not useful for describing the physical world. So in a realm like computer science, where you are explicitly reasoning about physical systems, should it really be a surprise that the mathematics of these physical systems (quantum theory) is the best way to attack the problems?

A Supporter

It’s always good to see the “quantum computing friendly” gain entry to high places:

Tony Hey to join Microsoft as Corporate Vice President
Professor Tony Hey is to join Microsoft Corporation as a corporate vice president to co-ordinate their Technical Computing Initiative. He will work across the company to co-ordinate Microsoft’s efforts to collaborate with the scientific community worldwide.
Currently Director of the UK’s e-Science Programme, Tony Hey has been a member of staff of the School of Electronics and Computer Science (ECS) since 1986, and was Head of School from 1994 to 1999.
He played a critical role in building the School into one of the UK’s leading academic departments, and has retained his Chair in the School throughout his period of secondment to UK e-Science.
‘This is wonderful news,’ said Professor Wendy Hall, Head of ECS, ‘and I am delighted to send our warmest congratulations to Tony on behalf of the School. His energy, vision, and sheer ability to make things happen will be of huge benefit to Microsoft and to future collaboration with researchers worldwide. At Southampton we are very glad that Tony will be retaining his Chair in the School of Electronics and Computer Science, and his strong links with the School and the University.’
Professor Hey is a Fellow of the Royal Academy of Engineering, the British Computer Society, the Institution of Electrical Engineers and a member of the Institution of Electrical and Electronic Engineers. In the New Year Honours List (2005) he received the CBE (Commander of the British Empire) for his services to science.
‘Today computation plays a critical role in advancing research across almost every field of science, yet far too often scientists must build all their own programming infrastructures and research their own algorithms to help advance their research effort,’ said Professor Hey. ‘By collaborating with the scientific community, I am confident that some of Microsoft’s advanced technologies can be used to accelerate their rate of discovery.’
He has worked in the field of parallel and distributed computing since the early 1980s and was instrumental in the development of the MPI message-passing standard and in the Genesis Distributed Memory Parallel Benchmark suite. In 1991, he founded the Southampton Parallel Applications Centre (now the IT Innovation Centre), which has played a leading technology transfer role in Europe and the UK in collaborative industrial projects. His personal research interests are concerned with performance engineering for Grid applications but he also retains an interest in experimental explorations of quantum computing and quantum information theory.

For those who don’t remember, Tony Hey was the editor for “Feynman Lectures on Computation” and the companion volume entitled “Feynman and Computation.”

Quantum Documentary?

One question I’ve had for a long time is why there hasn’t been a good documentary produced about the field of quantum information science. There is certainly a feeling that quantum computing has leaked into the mainstream of geeks through various sci-fi stories, articles in Scientific American, and postings on Slashdot, and yet there hasn’t been a NOVA-style documentary produced on the field, in spite of the (I think) fascinating results which have emerged from it in the last few decades (we’ve got to count quantum cryptography!)

Summer Teaching

This summer I’m teaching a course in the professional masters program here at UW:

CSE P 590 TU: Quantum Computing
Dave Bacon – Instructor
Day/Time: Wednesday 6:30-9:20 pm; Place: TBD
(First time offering.) An introduction to and survey of the field of quantum computing. Quantum computation is an emerging field whose goal is to design effectively atomic-sized computers which exploit the parallelism of the quantum mechanical laws of the universe. While this sounds futuristic, quantum computers are fast becoming a reality, and have the potential to revolutionize computation over the next twenty years. Topics include quantum algorithms, quantum error correction, and quantum cryptography. This course will cover why quantum computers can break certain public key cryptosystems, the engineering challenges in building a physical quantum computing device, and the level of security assured by quantum cryptographic devices. Prior knowledge of quantum theory is not necessary.

I just put up a preliminary version of the course website. The one issue which is bugging me a lot as I try to think about this class is exactly how to deal with the nearly three-hour length of each meeting. I may be a funny-looking guy, but am I really as entertaining as the extended versions of the Lord of the Rings?

Quantum CMOS?

If you go back and look at the early days of the semiconductor revolution, you see that there were many ideas for how to build gate sets out of n-type and p-type transistors. One set of ideas was known as NMOS technology and another was known as CMOS technology. In almost every modern computer chip CMOS technology currently reigns supreme. One of the main reasons for this has to do with the power dissipation properties of these technologies: CMOS circuits dissipate a lot less energy than NMOS circuits. Roughly speaking, CMOS circuits dissipate energy only when the circuit voltages are being switched, whereas NMOS circuits often dissipate energy even in between the switching of the voltages. As you can guess, this makes quite a bit of difference in the power consumption of these devices.
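As a rough rule of thumb (a textbook-level estimate, not something from the original post): the dynamic power of a CMOS circuit scales as
[tex]$P \approx \alpha C V^2 f$[/tex]
where [tex]$\alpha$[/tex] is the switching activity, [tex]$C$[/tex] the switched capacitance, [tex]$V$[/tex] the supply voltage, and [tex]$f$[/tex] the clock frequency, while NMOS-style static logic adds a pull-up current that burns power even when nothing is switching.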
So what is the equivalent technological question for a quantum computer? Here the analogue of the transistor used to build a circuit is the fault-tolerant quantum gate. So if we look at fault-tolerant gate constructions and consider their energetics, are there constructions which minimize energy dissipation?

Digital, Analog, Deterministic, Probabilistic

After reading Robert Laughlin’s book and thinking about his remarks on quantum computing, I’ve decided I can pinpoint exactly the error which he is making in his disregard for the possibility of building a quantum computer. The mistake, I believe, is basically a category error.
So let’s talk about computers. Computers are inherently physical devices, so all discussion of what is and what is not a computer boils down, at some level, to physics (I’m not taking a reductionist viewpoint here: I include the study of emergent properties of physical systems in what I call physics). Now there are two separations I want to make when I talk about the types of computers we encounter out in the wilds of the laboratory: the first is a description of the state space and the second is a description of the dynamics.
The state space of a classical computer comes in two forms: digital and analog. For digital computers the state space is a discrete, finite space. Such state spaces don’t exist in nature (as far as we know) but are excellent approximations for a great many physical devices. Thus the hard drive on your computer can store the value of a bit in the orientation of a ton of magnetic spins, but if we try to store a bit into a single spin we have problems. Every discrete finite space I know of is discrete only after some coarse graining. But surely, you say, quantum mechanics gives discrete state spaces, right? The energy levels from atomic orbitals are discrete. But this is only true if we have an isolated atomic system which does not interact with the rest of the universe. Introduce coupling to an electromagnetic field (acting either as the environment or as a measuring device) and these discrete energy levels “smear out,” meaning in effect that we can only think about them as discrete after we have coarse grained over this smearing. Whether there are truly discrete state spaces in physics (say in the wavefunction of the universe) is, I think, something for which we have little concrete evidence.
Back to computers. On the opposite end of the spectrum from a digital computer, it has been useful to introduce the notion of an analog computer. Here the state space of the computer is taken to be some continuous parameter. Thus, for example, the position of the planet Mercury could be considered a state space variable for our “solar system” computer. If one takes seriously the idea of an analog computer one runs into all sorts of problems. This is because it is easy to couple analog state spaces such that the variables of these systems are manipulated in such a way as to perform elementary arithmetic on these variables. And if you take seriously the continuous nature of the state space this allows you to do amazing things like solve NP-complete (and harder!) problems efficiently on these computers. Of course there is a fatal reason why no one has built such a computer: if we introduce any noise into these variables or any imprecision into our measurements, this power disappears. (There is still interest in analog computers, then, not for the computational power arising from the continuous nature of their state spaces, but because the transformations one can perform on them are sometimes difficult to perform on a digital computer. There doesn’t seem to be any superpolynomial speedup in any such devices, but one might get more efficient algorithms because the dynamics happens to do something which is not natural for your digital computer.)
OK, so state spaces are digital or analog, and analog state spaces do not lead to robust computing devices. Now we move on to the second category describing classical computers, the dynamics. The first type of dynamics I want to draw out is deterministic evolution. In this case the time evolution of the state space is completely fixed by the initial conditions. So we set up our planets in a particular configuration and Newton’s laws (or general relativity) take over and push forward the analog computation. Similarly, a few transistors work together to enact a NAND gate in which the digital data is transformed in a fully deterministic manner. As with the idea of digital data, deterministic evolution is often a consequence of some emergent property of the system (i.e. every once in a while your computer does have a hardware error…but you’ll have to wait around quite a long time for this to happen with modern computers!)
In the distinction between digital and analog, we saw that physics essentially gives us analog state spaces, but that for certain physical systems, when we apply the appropriate coarse graining, we end up with a digital device. What, then, is the dynamics from which deterministic evolution arises for a computer? For classical computers, this underlying dynamics is basically probabilistic evolution. Instead of being able to completely predict the outcome of the evolution of the state space, now we can only predict the probability of the evolution. Sometimes this evolution is so unpredictable (so noisy) that there is no way for such a device to function as anything resembling a deterministic computer. But sometimes this evolution is predictable enough to allow us to construct a robust, effectively deterministic computer from the evolution (and in the case of our real world computers this is effectively done for us by the physics of the semiconductor transistors). It even seems that using the inherent probabilistic nature of the evolution can increase the computational power of the device (although whether randomness really gives any computational advantage, the BPP=P question, is certainly one of the outstanding problems in computer science). So sometimes we not only want the probabilistic nature of our computer to yield deterministic evolution; often we want it to evolve probabilistically, but in a manner such that we control the probabilities with a high degree of precision.
Notice further that the type of dynamics for a classical computer affects how we describe the state space of the computer. If the evolution is deterministic, we can just keep track of the variables, but once we introduce probabilistic evolution, we need to talk about probability distributions (over a discrete state space in the digital case and over a continuous state space in the analog case).
So what is the category error which makes Laughlin sceptical of quantum computing? It appears that he is failing to distinguish between dynamics that is not deterministic and analog state spaces. The argument is basically that the amplitudes which appear in quantum theory are an analog state space and thus the computational power of quantum computers is analogous to that of analog classical computers. But this is to confuse the dynamics of a quantum computer (unitary evolution of probability amplitudes) with the state space itself. Among the first discoveries about quantum computers (which I basically credit to Bernstein and Vazirani) was that the amplitudes of quantum states behave, for most purposes, like probabilities and not like an analog state space. Indeed the state space of a quantum system can be either discrete (think energy levels) or continuous (think position of a particle). But since the dynamics is that funny form of evolution called unitary evolution, the description of either of these systems is in terms of probability amplitudes. Confusing the probability amplitudes with the state space is directly analogous to confusing the probability distribution of a classical system with the state space of the system.
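To put the comparison in symbols (a schematic of the standard textbook correspondence, my phrasing rather than Bernstein and Vazirani’s): a probabilistically evolving classical bit is described by a distribution
[tex]$(p_0, p_1), \qquad p_0 + p_1 = 1,$[/tex]
while a qubit is described by amplitudes
[tex]$\alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1.$[/tex]
In both cases the description is continuous, but the underlying state space is the same two-element digital space; the dynamics simply swaps stochastic matrices for unitary ones.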
Now just as a probabilistically evolving digital classical computer can be made to act deterministically, we can talk about getting a digital quantum computer (digital meaning the state space is finite and discrete) to act deterministically. Or better yet, just as we sometimes desire probabilistic evolution with a given accuracy, we can get a digital quantum computer to behave as a unitarily evolving system with a given accuracy. This, of course, is what the theory of fault-tolerant quantum computation is designed to do. Of course the world didn’t have to give us the ability to perform fault-tolerant quantum computation, but as far as we can tell it has.
State spaces are different from the dynamics and hence from the way we describe the system. In fact, it’s one of the reasons I cringe when I hear the word “analog” in discussions not just of quantum computers, but also in the biological sciences. The probabilistic nature of biological information processing has got nothing to do with the analog or digital nature of the underlying information (so while concentrations are analog, they should for all intents and purposes be thought of as a finite-size frequentist version of probabilities).
OK, I’ve clearly written too much on this subject. I’ll leave you be now. But just to finish things off, here is an interesting argument to give to those who believe that quantum computers are analog devices. As I mentioned above, it is known how to use analog computers to solve NP-complete problems efficiently. But it is widely suspected that quantum computers will not be able to solve NP-complete problems efficiently (which in turn is perhaps a reason to think long and hard about whether this is really true: I waver on about a three-year period as to whether I should spend more time attempting to solve NP-complete problems efficiently on a quantum computer). The question to ask a sceptic, then, is: if quantum computers are analog computers, why can’t they solve NP-complete (or harder) problems efficiently? Surely this indicates that there is no direct analogy between analog and quantum computers, or one of the thousands of people working in the field would have discovered such a connection by now! Well, the argument might not work, but I’d be curious to hear the variety of sceptics’ retorts.