Anyons in Honey

Alexei Kitaev has put a massive paper on the arXiv, cond-mat/0506438, describing a very interesting model of interacting spins on a honeycomb lattice. Looks like I’ve found my bus ride reading for the next month!

This This and This

When I read articles by creation scientists and their ilk I’m often filled with a feeling of “how the heck can you believe that, surely you’ve heard about this, this, and this.” Invariably they have read none of the “this”es. Yesterday I gave a talk at the Santa Fe Institute’s Complex Systems Summer School on quantum computation. After the talk, I was having a discussion with someone about quantum computing, and they said they still thought quantum computers were not a valid model of computation. And I was thinking to myself, “But surely you’ve read this, this, this, this, this, and this? Just to name a few.” I never thought I’d be having these thoughts around another scientist.

Hyperbole Wednesdays

Two posts over at Quantum Algorithms, one on D-Wave Systems and another one on a supposed algorithm for solving NP-complete problems efficiently on a quantum computer.
When I was a graduate student at Berkeley, whenever a paper came out claiming that quantum computers could solve NP-complete problems, we would all race to see who could figure out where the problem in the paper was. I usually don’t like to write blog articles about papers like the one commented upon in Quantum Algorithms (for the preprint, see here), but I’m not about to get a blogger username just to comment on the site cus I’m lazy, so I thought I’d just stick my comments here. As far as I can tell the paper is incorrect. In particular, if you examine Figure 3, the circuit does not perform as described. If I understand correctly, the state after the oracle gate is [tex]${1 \over 2^n} \sum_{x,y=0}^{2^n-1}(-1)^{f(x)}|y\rangle|x\rangle$[/tex]. The author then applies a unitary gate between the y and x registers, followed by a measurement of the qubits. This is supposed to allow one to identify a place where [tex]$f(x)=1$[/tex], but as far as I can tell, one gains only exponentially little information about such a location when there is only one marked item.
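To see concretely why so little leaks out, here is a small numpy toy of my own (not the paper’s construction): apply a single phase query with one marked item to the uniform superposition, follow it with Hadamards, and look at the outcome distribution. The marked index 387 is an arbitrary choice.

```python
import numpy as np

def fwht(v):
    """Unnormalized fast Walsh-Hadamard transform (iterative, on a copy)."""
    v = v.copy()
    n = len(v)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

n = 10
N = 2 ** n
marked = 387  # arbitrary: the single x with f(x) = 1
phases = np.ones(N)
phases[marked] = -1.0  # phase oracle applied to the uniform superposition

# Normalized n-qubit Hadamard is fwht / sqrt(N); the input state carries
# another factor 1/sqrt(N), hence the overall 1/N.
state = fwht(phases) / N
probs = state ** 2

# The all-zeros outcome swamps everything: probability (1 - 2/N)^2, while
# every other outcome has probability (2/N)^2, regardless of where the
# marked item sits.
print(probs[0], (1 - 2 / N) ** 2)
```

Measuring almost always yields the all-zeros string, and the tiny residual probabilities carry no usable pointer to the marked location, which is the sense in which the information gained is exponentially small.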
The other article on Quantum Algorithms points to an article in the MIT Technology Review about the Vancouver-based D-Wave Systems. Here the hyperbole is even more mysterious:

Vancouver startup D-Wave Systems, however, aims to build a quantum computer within three years. It won’t be a fully functional quantum computer of the sort long envisioned; but D-Wave is on track to produce a special-purpose, “noisy” piece of quantum hardware that could solve many of the physical-simulation problems that stump today’s computers, says David Meyer, a mathematician working on quantum algorithms at the University of California, San Diego.
The difference between D-Wave’s system and other quantum computer designs is the particular properties of quantum mechanics that they exploit. Other systems rely on a property called entanglement, which says that any two particles that have interacted in the past, even if now spatially separated, may still influence each other’s states. But that interdependence is easily disrupted by the particles’ interactions with their environment. In contrast, D-Wave’s design takes advantage of the far more robust property of quantum physics known as quantum tunneling, which allows particles to “magically” hop from one location to another.

It sounds to me, from the article, that the proposal is a quantum computer which implements a quantum adiabatic algorithm. The question of whether adiabatic quantum algorithms can be more efficient than classical algorithms is, from what I know, fairly controversial (see, for example, here.) I would have a hard time telling a venture capitalist to put money into something quite so controversial, but hey, what do I know, I’m just a lousy academic nerd and a chicken. If they can sell solutions to hard problem instances and actually succeed, then what do I know! And they will be rich!
Update: Suresh says I’m lazy. And I am. So looking again at quant-ph/0506137, the main problem is already apparent in equation (5). There the author has misconstrued the tensor product. If we take the output of this circuit seriously, the initial state [tex]$|00\rangle$[/tex] is transformed into [tex]$0$[/tex]. Not the state [tex]$|0\rangle$[/tex], but zero: the amplitude for the first qubit in this formula is zero.

Bashing Hashing

Recently there has been a lot of exciting work on cryptographic hash functions. Hash functions are functions which take some large (possibly infinite) domain and map it to a smaller (usually fixed-size) range. Cryptographic hash functions are hash functions which are one-way and also collision free. One-way means that if H is the hash function, then given H(x) it is computationally infeasible to determine x. Collision free means that given x, it is computationally infeasible to find a y not equal to x such that H(x)=H(y). Actually this is called weakly collision free. Strongly collision free means it is computationally infeasible to find any two x and y, not equal to each other, such that H(x)=H(y). You may notice that many collisions must exist when the domain is larger than the range, but for a strongly collision free hash function, finding even a single collision is difficult.
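These definitions are about computational infeasibility, not the absence of collisions, and that is easy to poke at concretely. Here is a toy of my own (SHA-256 truncated to 24 bits, so the search finishes in a blink): shrink the range and the birthday method finds a collision after roughly 2^(bits/2) tries.

```python
import hashlib
from itertools import count

def h(data: bytes, bits: int = 24) -> int:
    """A toy 'small' hash: SHA-256 truncated to `bits` bits, so that
    collisions are actually findable on a laptop."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def birthday_collision(bits: int = 24):
    """Find x != y with h(x) == h(y): store every value seen; by the
    birthday bound a repeat shows up after about 2^(bits/2) inputs."""
    seen = {}
    for i in count():
        m = str(i).encode()
        v = h(m, bits)
        if v in seen:
            return seen[v], m
        seen[v] = m

x, y = birthday_collision()
print(x, y, hex(h(x)))
```

With the full 256-bit range the same search would need on the order of 2^128 evaluations, which is the gap that “computationally infeasible” refers to.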
One common use for cryptographic hash functions is in digital signatures. Instead of digitally signing the entire document, one almost always applies a hash function to the document and then digitally signs the resulting value. So if it is possible to construct documents which have the same hash value, then this whole scheme gets into trouble.
Recently two widely used cryptographic hash functions, MD5 and SHA, have come under attack. Work by a whole host of people, including papers by Wang and Yu, and Biham, Chen, Joux, Carribault, Lemuet, and Jalby, has shown that it is possible to find collisions for these hash functions in computationally feasible time. But wait, you say: finding collisions doesn’t necessarily mean these hash functions are broken for the digital signature schemes we described above. You might think this because the attacks produce essentially random collisions; for the specific task of creating a forged document with the same hash value, you might think these attacks don’t work. But you would be wrong! In fact, you can use these attacks to create forged documents with the same hash value. Magnus Daum and Stefan Lucks have a wonderful discussion and demonstration of this here. What is very cool is that they have two postscript files which both produce the same value after applying the MD5 hash function (a25f7f0b 29ee0b39 68c86073 8533a4b9, heh.) (Previously Kaminsky and Mikle have also shown such capabilities.)
So…do you really trust MD5? I don’t.
Update: As Aram points out in the comments, this page has a cool list of hash functions and those that have been broken. Not a pretty picture.

Self-Correction

Sometimes you write a paper and think it’s all ready for submission, and then after you submit it to the arXiv you find that it is lacking for quite a few reasons. On Friday I posted the paper quant-ph/0506023 (and did the new paper dance!) But after communications from Michael Nielsen and David Poulin, I realized that I had made a mistake in one of my claims (the proof I had did not work) and that I had very much misrepresented what is new in this paper (in particular in relationship to quant-ph/0504189 and quant-ph/0412076.) Luckily the mistake in my proof was not a big deal for the paper, and also luckily one can correct one’s foolishness and clarify what’s new and interesting. Here is the updated title and abstract:
Operator Quantum Error Correcting Subsystems for Self-Correcting Quantum Memories
Authors: Dave Bacon
Comments: 17 pages, 3 figures, title change, rewrite of connection to operator quantum error correction, references added

The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error correcting codes, one requires the ability to apply a procedure which exactly reverses on the error correcting subspace any correctable error. In contrast, for operator error correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform correction only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error correction procedures.

Self Promotion of Self-Correcting Paper

Everybody do the new paper dance, quant-ph/0506023
Quantum Error Correcting Subsystems and Self-Correcting Quantum Memories
Authors: D. Bacon
Comments: 16 pages

The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. In this paper we use this fact to define subsystems with quantum error correcting capabilities. In standard quantum error correcting codes, one requires the ability to apply a procedure which exactly reverses on the error correcting subspace any correctable error. In contrast, for quantum error correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform correction only modulo the subsystem structure. Here we present two examples of quantum error correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error correction procedures.

APS Topical Group

The American Physical Society now has a new topical group, Quantum Information, Concepts, and Computation. Here is an article about the topical group. Some quotes:

“It is meant to be a broadly inclusive home for researchers whose professional lives may have kicked off in various traditional disciplines, but who nonetheless share an over-arching interest in the foundations and ‘surprising implications’ of quantum mechanics,” said Caltech’s Hideo Mabuchi, GQI’s acting chair.

and

Greenberger also feels the field needs an effective lobbying group to represent its interests to federal funding agencies, most notably the National Science Foundation. “Many young people are becoming interested in the field, but there are few opportunities for having their research funded,” he said.
Part of the problem is that quantum theory suffers from the perception that it is a field for “old men,” since the debate dates back to 1935 and the famous Einstein-Podolsky-Rosen paradox. (That paper is still the most downloaded publication from the APS journal archives, 70 years after it was written.) But Greenberger points out that it is, in fact, a vibrant exciting field at the forefront of physics, using all the latest laboratory techniques, and spinning off the newly emerging fields of quantum cryptography and quantum computing.

Pigeons, Discrete Log, and Entropy

The pigeon hole principle states that if you put m pigeons into n&lt;m holes, then at least one hole contains at least two pigeons. Consider, for example, the following problem:
Given a list [tex]$a_1,\dots,a_n$[/tex] of residues mod [tex]$p$[/tex] with [tex]$n>\log_2 p$[/tex], find two distinct subsets [tex]$S_1,S_2 \subset \{1,2,\dots,n\}$[/tex] such that [tex]$\prod_{i \in S_1} a_i =\prod_{j \in S_2}a_j~{\rm mod}~p$[/tex].
Here we have said “find”, but how do we even know that two subsets which solve this problem exist? The pigeon hole principle! The number of possible subsets of [tex]$\{1,2,\dots,n\}$[/tex] is [tex]$2^n$[/tex] (because each element either is or is not in the subset, so each subset corresponds to a binary string of length [tex]$n$[/tex].) But [tex]$2^n>p$[/tex], so we are putting more pigeons (the subsets) into our holes (the possible products, which run from [tex]$0$[/tex] to [tex]$p-1$[/tex]). Thus by the pigeon hole principle there must be two distinct subsets (two pigeons) whose products are the same.
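The argument above can be run as a brute-force search (my own sketch; p = 101 and n = 8 are arbitrary choices with 2^n > p, so the pigeonhole guarantee kicks in). Note the exhaustive search takes exponential time; the point is that the solution must exist, not that it is easy to find.

```python
import random
from itertools import combinations

def subset_product_collision(residues, p):
    """Find two distinct subsets with equal product mod p. With n
    residues and 2^n > p there are more subsets (pigeons) than
    residue values (holes), so this search is guaranteed to succeed."""
    n = len(residues)
    seen = {}
    for r in range(n + 1):
        for s in combinations(range(n), r):
            prod = 1
            for i in s:
                prod = prod * residues[i] % p
            if prod in seen:
                return seen[prod], s
            seen[prod] = s

p = 101                                    # 2^8 = 256 > 101, so n = 8 suffices
a = [random.randrange(1, p) for _ in range(8)]
s1, s2 = subset_product_collision(a, p)
print(s1, s2)
```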
In computational complexity the above problem belongs to a class of search problems called PPP, which, according to the complexity zoo, stands for “Polynomial Pigeonhole Principle.” These are problems where we are searching for a solution to an instance of a problem, this solution can be verified to be a solution in polynomial time, and the existence of a solution is guaranteed by a pigeon hole argument. The second of these should be familiar from the class NP, and the first of these is what you’re doing when you go beyond decision problems and actually want to find a solution.
PPP is interesting for various reasons, but one of the reasons I know of it is that it is at least as hard as the discrete log problem. In the discrete log problem one is given two elements [tex]$a,b \in {\mathbb Z}_p$[/tex] and asked to find the smallest [tex]$s$[/tex] such that [tex]$a^s=b~{\rm mod}~p$[/tex]. One of the neat things about a quantum computer is that it can efficiently solve the discrete log problem, whereas no efficient classical algorithm is known.
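For concreteness, here is a brute-force sketch of discrete log (my own illustration; the O(p) loop below is exactly the exponential-in-log-p behavior that Shor’s algorithm avoids):

```python
def discrete_log(a, b, p):
    """Smallest s with a^s = b mod p, found by trying s = 0, 1, 2, ...
    This takes O(p) time, i.e. exponential in the bit length of p;
    Shor's algorithm does it in poly(log p) on a quantum computer."""
    value = 1
    for s in range(p):
        if value == b % p:
            return s
        value = value * a % p
    return None  # b is not in the subgroup generated by a

# 3 generates the multiplicative group mod 17, so every b has a log.
print(discrete_log(3, pow(3, 7, 17), 17))  # prints 7
```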
So what do we know about the power of quantum computers to solve problems in PPP? Well, we know a little. We know that a naive approach to solving these problems will fail. Suppose that we try to solve the above PPP problem by querying the function [tex]$f_{a_1,\dots,a_n}(b_1,\dots,b_n)=a_1^{b_1} a_2^{b_2} \cdots a_n^{b_n}$[/tex], where [tex]$b_i \in {\mathbb Z}_2$[/tex]. Then a result by Scott Aaronson says that any quantum algorithm requires [tex]$\Omega((2^n)^{1/5})$[/tex] queries of this function (later improved to [tex]$\Omega((2^n)^{1/3})$[/tex] by Yaoyun Shi) to solve this problem. So this naive approach to attacking problems in PPP fails. An interesting open question remains, however: is there a more sophisticated approach to efficiently solving problems in PPP using a quantum computer?
Interestingly, the problem Scott and Yaoyun work on is also relevant to a physicist in the lab. The problem they consider is called the collision problem. Suppose you have a function [tex]$f$[/tex] from [tex]${\mathbb Z}_N$[/tex] to [tex]${\mathbb Z}_N$[/tex] which is promised to be either 1-to-1 or 2-to-1, and the problem is to distinguish between these two cases. Scott and Yaoyun’s result says that you need to query this function [tex]$\Omega(N^{1/3})$[/tex] times to distinguish between these two cases (and, it turns out, there is a quantum algorithm which uses this number of queries.) Now how does this have relevance to a physicist? Suppose we follow the standard query method and query the function to produce the state [tex]$N^{-1/2} \sum_{x \in {\mathbb Z}_N} |x\rangle|f(x)\rangle$[/tex]. Discarding the second register produces a mixed state of rank [tex]$N$[/tex] if [tex]$f$[/tex] is 1-to-1, but of rank [tex]$N/2$[/tex] if [tex]$f$[/tex] is 2-to-1. The entropy of these states is thus either [tex]$\log_2 N$[/tex] or [tex]$\log_2 N-1$[/tex], respectively. Now suppose you are a physicist in the lab and you want to measure the entropy of a quantum state which you can prepare multiple copies of. You might be interested in how many copies you will need to measure this entropy. Scott and Yaoyun’s result then implies that you will need to perform at least [tex]$\Omega(d^{1/3})$[/tex] experiments to measure the entropy, where [tex]$d$[/tex] is the dimension of your quantum states. This is bad news if you really want to measure the entropy of a many-qubit system! In a related manner, one can think of the period-finding portion of Shor’s algorithm as a method for determining the entropy of a special class of states which satisfy a particular promise (that they are periodic).
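The rank and entropy bookkeeping can be checked directly with a small density-matrix calculation (my own numpy sketch, with N = 16 as an arbitrary small example):

```python
import numpy as np

def reduced_state_entropy(f, N):
    """Von Neumann entropy, in bits, of the first register of the state
    N^{-1/2} sum_x |x>|f(x)> after the second register is discarded."""
    psi = np.zeros((N, N))           # amplitude matrix: rows = x, cols = f(x)
    for x in range(N):
        psi[x, f(x)] = 1 / np.sqrt(N)
    rho = psi @ psi.T                # reduced density matrix of the x register
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]     # drop numerical zeros before the log
    return float(-np.sum(evals * np.log2(evals)))

N = 16
print(reduced_state_entropy(lambda x: x, N))       # 1-to-1: log2 N = 4 bits
print(reduced_state_entropy(lambda x: x // 2, N))  # 2-to-1: log2 N - 1 = 3 bits
```

The reduced state is maximally mixed on its support, so the entropy is just the log of the rank: the full log2 N in the 1-to-1 case, one bit less in the 2-to-1 case.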
Discrete log is a pigeon problem, naive quantum algorithms for pigeon problems fail, and pigeon problems put limits on how efficiently we can measure the entropy of a system. It’s topics like these, which spread from computer science all the way to experimental physics, that make me happy to be working in quantum information science.

It's Quantum So Should We Be Surprised?

Over the past few years there have been quite a few results where methods of quantum computing have been used to prove or reprove results about classical objects. Among these are lower bounds on locally decodable codes (de Wolf, Kerenidis, Wehner), limitations on classical algorithms for local search problems (Aaronson), a proof that PP is closed under intersection (Aaronson), and ways to use quantum communication complexity to prove lower bounds on classical circuit depths (Kerenidis). Now we have more to add to this mix, with quant-ph/0505188, “Lower Bounds on Matrix Rigidity via a Quantum Argument” by Ronald de Wolf (CWI Amsterdam). The rigidity of a matrix is the minimum number of entries which need to be changed in order to bring the rank of the matrix down to a certain value. What Ronald does in this paper is show how to use a quantum argument to prove some lower bounds on the rigidity of the Hadamard matrix. Very nice!
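To make the definition concrete, here is a toy brute-force calculation of my own (nothing to do with de Wolf’s quantum argument): for the 4x4 Hadamard matrix, search for the fewest sign flips that bring the rank down to 2. Restricting the changes to sign flips only gives an upper bound on the true rigidity, but it shows exactly the quantity being bounded.

```python
import itertools
import numpy as np

def hadamard(k):
    """The 2^k x 2^k Hadamard matrix, built by the Sylvester construction."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

def sign_flip_rigidity(M, target_rank):
    """Smallest number of sign flips bringing rank(M) <= target_rank.
    Sign flips are only one kind of entry change, so this upper-bounds
    the true rigidity; brute force, tiny matrices only."""
    flat = M.ravel()
    for k in range(M.size + 1):
        for cells in itertools.combinations(range(M.size), k):
            A = flat.copy()
            A[list(cells)] *= -1
            if np.linalg.matrix_rank(A.reshape(M.shape)) <= target_rank:
                return k

H = hadamard(2)                # the 4x4 Hadamard matrix, full rank 4
k = sign_flip_rigidity(H, 2)
print(k)
```

Since a perturbation with k nonzero entries has rank at most k, dropping the rank from 4 to 2 needs at least 2 changed entries, and copying one row into two others shows 4 flips always suffice; the search pins down the exact value in between.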
Whenever I see one of these quantum proofs of classical results, I think: why do the quantum arguments make these proofs easier, sometimes significantly so? But perhaps this thinking is backwards. Because the universe is quantum mechanical, should we really be surprised that the language of this theory is so powerful? One often hears the statement that it is amazing that mathematics is so useful for describing the physical world. But really this disregards the fact that different parts of mathematics are and are not useful for describing the physical world. So in a realm like computer science, where you are explicitly reasoning about physical systems, should it really be a surprise that the mathematics of those physical systems (quantum theory) is the best way to attack the problems?