FOCS Time

The 46th Annual IEEE Symposium on Foundations of Computer Science (a.k.a. FOCS) submission deadline was last Friday. Not surprisingly, this means that more than the average number of interesting papers were posted in Tuesday's listing on the arXiv.
Of course, I will start by doing a little shameless self-promotion. Here is the latest and greatest from myself, Andrew Childs (Caltech) and Wim van Dam (UCSB) (everyone do the new paper dance) quant-ph/0504083:

From optimal measurement to efficient quantum algorithms for the hidden subgroup problem over semidirect product groups
Dave Bacon, Andrew Childs, and Wim van Dam
We approach the hidden subgroup problem by performing the so-called pretty good measurement on hidden subgroup states. For various groups that can be expressed as the semidirect product of an abelian group and a cyclic group, we show that the pretty good measurement is optimal and that its probability of success and unitary implementation are closely related to an average-case algebraic problem. By solving this problem, we find efficient quantum algorithms for a number of nonabelian hidden subgroup problems, including some for which no efficient algorithm was previously known: certain metacyclic groups as well as all groups of the form $(Z_p)^r \rtimes Z_p$ for fixed $r$ (including the Heisenberg group, $r=2$). In particular, our results show that entangled measurements across multiple copies of hidden subgroup states can be useful for efficiently solving the nonabelian HSP.

The nonabelian hidden subgroup problem is one of the most frustratingly well-studied problems in the theory of quantum algorithms. This is because an efficient algorithm for the hidden subgroup problem over the symmetric group would lead to an efficient algorithm for the graph isomorphism problem (the graph isomorphism problem asks whether two graphs, presented to you by their adjacency matrices, are really the same graph under some relabeling of the vertices). To this end, there are many positive and many negative results about this problem. One thing we do know is that the query complexity of the problem is polynomial. We just don't know how to turn this into efficient algorithms. In our paper we show, in particular for certain groups which include the Heisenberg group, that there are quantum algorithms which make explicit use of entangled measurements across multiple copies of the hidden subgroup states to produce an efficient quantum algorithm. Certainly I take this as evidence that there is much left to be understood about how exactly we can achieve exponential speedups of quantum over classical computers.
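(If you haven't met graph isomorphism before, here is a little brute-force sketch in Python of what the problem asks. The function and the example graphs are just my illustration; its factorial running time is precisely why we would love a quantum shortcut.)

```python
# Brute-force graph isomorphism: try every relabeling of the vertices and check
# whether the adjacency matrices match. Exponentially slow -- for illustration only.
from itertools import permutations

def isomorphic(adj_a, adj_b):
    """Return True if the graphs given by adjacency matrices adj_a, adj_b are isomorphic."""
    n = len(adj_a)
    if len(adj_b) != n:
        return False
    for perm in permutations(range(n)):  # all n! relabelings
        if all(adj_a[i][j] == adj_b[perm[i]][perm[j]]
               for i in range(n) for j in range(n)):
            return True
    return False

# The path 0-1-2 and the path 1-0-2 are the same graph under relabeling.
p1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
p2 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
print(isomorphic(p1, p2))  # True
```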
Another interesting component of this work is that it shows how an efficient quantum algorithm for the hidden subgroup problems we consider can be achieved by producing an efficient algorithm for a certain algebraic problem. This was also true for the dihedral hidden subgroup problem, where the algebraic problem turned out to be the subset sum problem. Since that problem is NP-complete, this didn't lead to an efficient algorithm. But for the groups we consider in our new paper, the problem is efficiently solvable classically, and we are able to bootstrap this into a quantum algorithm to achieve the speedup.
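To make the dihedral connection concrete, here is subset sum in its simplest form, solved by the standard pseudo-polynomial dynamic program (my own toy code, not anything from either paper):

```python
# Subset sum by dynamic programming: reachable[s] holds indices of some subset
# of `values` summing to s. Runs in time O(len(values) * target).
def subset_sum(values, target):
    reachable = {0: []}
    for i, v in enumerate(values):
        updates = {}
        for s, subset in reachable.items():
            if s + v <= target and s + v not in reachable:
                updates[s + v] = subset + [i]
        reachable.update(updates)
    return reachable.get(target)  # None if no subset works

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # [2, 4]: 4 + 5 = 9
```

The rub for the dihedral HSP is that the instances which arise there have exponentially large targets, where this pseudo-polynomial trick is hopeless; that is where the NP-completeness really bites.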
Another interesting, and closely related paper, is the paper by Cris Moore (University of New Mexico…no “h!”) and Alex Russell (University of Connecticut), quant-ph/0504067:

Bratteli Diagrams and Subexponential Time Quantum Algorithms for Hidden Subgroup Problems: or, Fourier Sampling Strikes Back
Cristopher Moore and Alexander Russell
We present an explicit measurement in the Fourier basis that solves an important case of the Hidden Subgroup Problem, including the case to which Graph Isomorphism reduces. This entangled measurement uses $k = \log_2 |G|$ registers, and each of the $2^k$ subsets of the registers contributes some information. We then give a general framework using the Bratteli diagram of the group for constructing worst-case to average-case reductions from the HSP to generalized Subset Sum problems analogous to those of Regev for the dihedral group. As a result, we obtain subexponential-time quantum algorithms for the hidden subgroup problem on a number of new group families, including the Heisenberg and affine groups. Our framework also yields subexponential-time algorithms for finding the order of hidden subgroups in all solvable groups of low degree, including all nilpotent groups.

Here, Cris and Alex present a large number of subexponential-time algorithms for numerous groups (including the Heisenberg group, for which we were able to find a polynomial-time algorithm). They are able to do this by showing how these algorithms can arise from the classical solution of a problem which arises from the representation theory of these groups. Again, what is essential in these algorithms is the use of multiple quantum queries.
As you can see, there seems to be quite a revival going on in research on the hidden subgroup problem. There is a lot of structure which many of us are starting to see! Fun times.
Along different lines, Ran Raz (Weizmann Institute) also has a very intriguing paper, quant-ph/0504075:

Quantum Information and the PCP Theorem
Ran Raz
We show how to encode $2^n$ (classical) bits $a_1,\ldots,a_{2^n}$ by a single quantum state $|\Psi\rangle$ of size $O(n)$ qubits, such that: for any constant $k$ and any $i_1,\ldots,i_k \in \{1,\ldots,2^n\}$, the values of the bits $a_{i_1},\ldots,a_{i_k}$ can be retrieved from $|\Psi\rangle$ by a one-round Arthur-Merlin interactive protocol of size polynomial in $n$. This shows how to go around Holevo-Nayak's Theorem, using Arthur-Merlin proofs.
We use the new representation to prove the following results:
1) Interactive proofs with quantum advice: We show that the class $QIP/qpoly$ contains ALL languages. That is, for any language $L$ (even non-recursive), the membership $x \in L$ (for $x$ of length $n$) can be proved by a polynomial-size quantum interactive proof, where the verifier is a polynomial-size quantum circuit with working space initiated with some quantum state $|\Psi_{L,n}\rangle$ (depending only on $L$ and $n$). Moreover, the interactive proof that we give is of only one round, and the messages communicated are classical.
2) PCP with only one query: We show that the membership $x \in SAT$ (for $x$ of length $n$) can be proved by a logarithmic-size quantum state $|\Psi\rangle$, together with a polynomial-size classical proof consisting of blocks of length $polylog(n)$ bits each, such that after measuring the state $|\Psi\rangle$ the verifier only needs to read ONE block of the classical proof.
While the first result is a straightforward consequence of the new representation, the second requires the additional machinery of a quantum low-degree test that may be interesting in its own right.

I haven’t had time to work through this paper in detail, but one of the main results, that QIP(2) (one-round quantum interactive proofs) with polynomial quantum advice (a polynomial-size quantum state) contains all languages, seems to me to be very surprising. I know that previously Scott Aaronson had shown that BQP (i.e. your standard efficient quantum algorithms) with polynomial quantum advice is contained in PP (probabilistic polynomial time, the worst-named complexity class ever!) with polynomial classical advice, but the jump to all languages seems to me to be quite a huge jump! Well, I certainly have some reading to do!
Finally there is a paper by Harry Buhrman (University of Amsterdam, CWI Amsterdam), Matthias Christandl (CQC Cambridge), Patrick Hayden (McGill), Hoi-Kwong Lo (University of Toronto) and Stephanie Wehner (CWI Amsterdam), quant-ph/0504078:

On the (Im)Possibility of Quantum String Commitment
Harry Buhrman, Matthias Christandl, Patrick Hayden, Hoi-Kwong Lo, and Stephanie Wehner
Unconditionally secure non-relativistic bit commitment is known to be impossible in both the classical and quantum worlds. However, when committing to a string of $n$ bits at once, how far can we stretch the quantum limits? We consider quantum schemes where Alice commits a string of $n$ bits to Bob, in such a way that she can only cheat on $a$ bits and Bob can learn at most $b$ bits of “information” before the reveal phase. We show a negative and a positive result, depending on how we measure Bob's information. If we use the Holevo quantity, no good schemes exist: $a+b$ is at least $n$. If, however, we use accessible information, there exists a scheme where $a = 4\log n + O(1)$ and $b = 4$. This is classically impossible. Our protocol is not efficient; however, we also exhibit an efficient scheme when Bob's measurement circuit is restricted to polynomial size. Our scheme also implies a protocol for $n$ simultaneous coin flips which achieves higher entropy of the resulting string than any previously known protocol.

Bit commitment is something like the cryptographic version of a securely sealed safe. The protocol begins with Alice supplying Bob with data which commits Alice to a bit but does not reveal the value of that bit to Bob. Later, when they wish to reveal the commitment, Alice sends some information to Bob from which Bob can infer what value Alice committed to. Unconditional security would mean that there is no way for Alice to change her commitment after she has communicated with Bob without Bob being able to detect the change, and, similarly, that Bob cannot learn anything about Alice's bit before the reveal. It is known (or I should say there is a proof with what seem like reasonable foundations) that in the standard quantum information setting, unconditionally secure quantum bit commitment is not possible. But what about committing not to a single bit, but to a string? Now Alice can cheat on some bits, and Bob can learn some bits, but what exactly is the relationship between these two quantities? Interestingly, the authors show that when the measure of how much Bob learns is the accessible information, there are quantum schemes which are impossible classically. Nothing like a result showing that quantum information kicks classical information's rear end!
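To fix ideas, here is what the commit/reveal structure looks like in a minimal classical, hash-based sketch of my own; note it is only computationally hiding and binding, which is exactly the sort of caveat the unconditional impossibility results say you cannot escape:

```python
# Toy hash-based bit commitment (illustrative only, not from the paper).
import hashlib, os

def commit(bit):
    nonce = os.urandom(32)  # random padding so Bob can't just guess-and-hash
    c = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return c, nonce         # Alice sends c to Bob and keeps nonce secret

def reveal_ok(commitment, bit, nonce):
    # Bob recomputes the hash; Alice can't switch bits without finding a collision.
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == commitment

c, nonce = commit(1)            # commit phase: c tells Bob (almost) nothing
print(reveal_ok(c, 1, nonce))   # honest reveal: True
print(reveal_ok(c, 0, nonce))   # Alice tries to cheat: False
```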
So, by my count, on Tuesday we had two algorithms papers demonstrating new speedups, a complexity theory paper demonstrating the amazing power of quantum advice for quantum interactive proofs, and a cryptography paper demonstrating a new separation between quantum and classical cryptography. Not a bad day for the arXivs!

Snarky Mode On

Robert Laughlin, Nobel prize winner for the theory behind the fractional quantum Hall effect, has a new book out, “A Different Universe.” The book is interesting, but it also has its problems. As you might guess from previous posts I’ve made, Professor Laughlin has an amazing view of quantum computers:

There is a great deal of interest lately in the quantum computer, a fundamentally new kind of computational hardware that would exploit the entanglement of the quantum wave function to perform calculations presently impossible with conventional computers. The most important of these is the generation of enormous prime numbers and the quick factorization of other enormous numbers. The impossibility of factoring a number that is the product of two large primes in reasonable time with conventional computers is the basis of modern cryptography. However, quantum computation has a terrible Achilles heel that becomes clear when one confronts the problem of reading out the answer: the effects that distinguish quantum computers from conventional ones also cause quantum indeterminism. Quantum-mechanical wave functions do indeed evolve deterministically, but the process of turning them into signals people can read generates errors. Computers that make mistakes are not very useful, so the design issue in quantum computation that counts is overcoming mistakes by measurement. A textbook method for doing this is to place a million copies of the same experiment in a small box and measure something they do collectively-generating oscillating magnetic fields, for example, as occurs in a quantum computer built with electron spins. The damage inflicted by the measurement process then affects only a few copies, leaving the rest intact. This trick is so powerful that variations of it enable you to read out the entire wave function of any quantum computer, at least in principle. However a logical implication is that you have created not a fabulous new kind of digital computer but a conventional analogue computer-a type of machine we do not use in the modern era because it is so easily disrupted by noise. Thus the frenzy over quantum computing misses the key point that the physical basis of computational reliability is emergent Newtonianness. One can imagine doing a computation without exploiting these principles, just as one can imagine proving by brute force that broken symmetry occurs, but a much more likely outcome is that eliminating computational mistakes will prove to be fundamentally impossible because its physical basis is absent. The view that this problem is trivial is a fantasy spun out of reductionist beliefs. Naturally, I hope I am wrong, and I wish those who invest in quantum computing the best of luck. I also ask that anyone anxious to invest in a bridge in lower Manhattan contact me right away, for there will be discounts for a limited time only.

Wow. Can I really read this and not respond? I just can’t resist. And especially I just can’t resist snarking. I should apologize for what I’m about to write, but my feeble mind just can’t take it anymore! So here are my suggestions for Laureate Laughlin:

1. Please read J. von Neumann’s “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components” (1956). The basis of computer reliability has absolutely nothing to do with “Newtonianness.” The basis of conventional computer reliability is redundancy, and, more physically, the thermodynamics of many condensed matter systems.
2. After you’ve mastered the most fundamental ideas of fault tolerance, it might be useful to understand the ideas behind error correction. Please read C. Shannon’s “A Mathematical Theory of Communication” (1948). Sure, we are going backwards in time, but I think it’s important for you to realize that naive redundancy (“place a million copies”) is not the only way to encode information; see the little illustration after this list. Indeed, this fact will become very important as we move on to step 3.
3. Now you’re ready for the big stuff. I know you know quantum theory like the back of your hand, so this step will be easier for you than for many others. Please read John Preskill’s “Fault-tolerant Quantum Computation.” See how the previous two ideas, when slightly modified for the quantum world, lead to a theory of fault-tolerant quantum computers. Isn’t that amazing? I consider it to be one of the most important results in physics in the last thirty years, but you’re older and wiser, so you may feel free to downgrade it. But please don’t desecrate what you haven’t had the time to understand. Quantum error correcting degrees of freedom are most definitely not the simplistic collective degrees of freedom which you insinuate (“oscillating magnetic fields”). The idea is more subtle, and thus, I believe, more beautiful. While beauty is in the eye of the beholder, you must surely admit that your solution to the fractional quantum Hall effect is only beautiful when you have the background to understand the theory; so too, quantum fault tolerance is beautiful, but only once you’ve sat down and understood the theory.
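Here is the toy illustration promised in point 2: a Hamming (7,4) code, which protects four data bits against any single bit flip using only three parity bits, no million copies required (my own example, not from any of the references above):

```python
# Hamming (7,4): 4 data bits -> 7-bit codeword; any single bit flip is correctable.
def hamming_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming_decode(c):
    c = list(c)
    # each syndrome bit checks the positions whose 1-based index has that bit set
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # equals the position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming_encode([1, 0, 1, 1])
word[4] ^= 1                              # corrupt one bit in transit
print(hamming_decode(word))               # [1, 0, 1, 1] -- error corrected
```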
Oh, and by the way, the generation of large prime numbers is easy, not hard, for conventional computers (but you did better than the string theorist Michio Kaku, whom I once saw on TV claim that quantum computers were amazing because they could efficiently multiply large numbers).
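And since I brought it up: generating a large prime classically really is easy. A standard sketch using the Miller-Rabin primality test (my code; use a real library for real cryptography):

```python
# Generate a random prime: pick random odd numbers of the right size and test
# for primality with Miller-Rabin until one passes.
import random

def is_probable_prime(n, rounds=40):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):               # each round: error probability <= 1/4
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # witness found: definitely composite
    return True

def random_prime(bits=512):
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # right size, odd
        if is_probable_prime(n):
            return n

print(random_prime(512))   # a cryptographic-size prime in a fraction of a second
```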

Quantum Jobs

Todd Brun has put together a list of faculty openings and postdoctoral positions available in quantum information processing. See this page. In a related note, but not yet listed on Todd’s website, Australia continues its battle with Canada for the center of the quantum computing universe with various job openings listed on Michael Nielsen’s blog here, here, and short visiting positions here.

A Messy Room Encodes One Bit

How do we store information? One way is to use a magnetic medium, as in a hard drive, where the information is encoded into the total magnetization of a group of spins. Another way is to use a capacitor and a transistor to store the information in the charge on the capacitor.
Now researchers at Philips Research Laboratories in Eindhoven, the Netherlands (Lankhorst, M. H. R., Ketelaars, B. W. S. M. M. & Wolters, R. A. M., Nature Materials, published online, doi:10.1038/nmat1350 (2005)) have created a storage medium in which the information is stored in a very strange way. Instead of the information being encoded into the total charge or the magnetization, their information is encoded into the degree of freedom describing whether the medium is ordered or disordered. The idea of storing information in the ordered versus disordered phase has been around for a long time (such devices are called “ovonic”), but apparently this new research is the first really viable realization of such a device.
The researchers use antimony telluride, which is naturally in an amorphous state, with many of the atoms of the material all jumbled around. A small electrical pulse, however, will turn this state into an ordered state, with the atoms lined up in a crystalline structure. A larger electrical pulse (more voltage) will melt this crystalline structure and return the system to a disordered jumble of atoms. The state of the system can be read out by measuring the resistance across the material (the ordered phase has a much lower resistance). Thus we can store our binary 0 in the ordered phase and our binary 1 in the disordered phase, and both write and read out this information.
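As a toy model of the read/write logic just described (the pulse voltages and resistance values here are made-up numbers of mine, purely for illustration):

```python
# Toy ovonic memory cell: bits live in the ordered/disordered phase of the material.
class OvonicCell:
    CRYSTALLIZE_V = 1.0   # small pulse anneals into the ordered, crystalline phase
    MELT_V = 3.0          # large pulse melts back into the disordered, amorphous phase

    def __init__(self):
        self.phase = "amorphous"          # the as-fabricated, jumbled state

    def pulse(self, volts):
        if volts >= self.MELT_V:
            self.phase = "amorphous"      # write 1: disorder
        elif volts >= self.CRYSTALLIZE_V:
            self.phase = "crystalline"    # write 0: order

    def read(self):
        resistance = 1e3 if self.phase == "crystalline" else 1e6  # ordered phase conducts
        return 0 if resistance < 1e5 else 1

cell = OvonicCell()
cell.pulse(1.0); print(cell.read())   # 0: crystalline
cell.pulse(3.0); print(cell.read())   # 1: amorphous
```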
Which makes me wonder: which other order parameters in statistical physics could be used to store information? Can we store information in the two phases of a metal, superconducting versus normally conducting? How about in the fluid-superfluid transition? OK, both of these are totally impractical, but maybe there is an interesting order parameter which we are missing, but which would make an amazingly robust and fast storage device?

Optimality Feels Good

New paper! New paper! Let’s all do the new paper dance. Posted to the arXiv today: quant-ph/0503047. This paper is a revision of an earlier paper; the new version shows that the protocol in the earlier paper is in fact optimal.
Optimal classical-communication-assisted local model of n-qubit Greenberger-Horne-Zeilinger correlations
Authors: Tracey E. Tessier, Carlton M. Caves, Ivan H. Deutsch, Dave Bacon, Bryan Eastin
Comments: This submission supersedes quant-ph/0407133. It is a substantially revised version of the previous document and now includes an optimality proof of our model

We present a model, motivated by the criterion of reality put forward by Einstein, Podolsky, and Rosen and supplemented by classical communication, which correctly reproduces the quantum-mechanical predictions for measurements of all products of Pauli operators on an $n$-qubit GHZ state (or "cat state"). The $n-2$ bits employed by our model are shown to be optimal for the allowed set of measurements, demonstrating that the required communication overhead scales linearly with $n$. We formulate a connection between the generation of the local values utilized by our model and the stabilizer formalism, which leads us to conjecture that a generalization of this method will shed light on the content of the Gottesman-Knill theorem.
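To see what the model has to reproduce, here is a brute-force numpy computation of the GHZ correlations in question (my own illustration of the target statistics, not the communication model itself):

```python
# Expectation values of tensor products of Pauli operators on an n-qubit GHZ state.
import numpy as np
from functools import reduce

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ghz(n):
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)     # (|00...0> + |11...1>)/sqrt(2)
    return psi

def correlation(paulis, psi):
    op = reduce(np.kron, paulis)          # tensor product of single-qubit operators
    return np.real(psi.conj() @ op @ psi)

psi = ghz(3)
print(correlation([X, X, X], psi))   # +1 (up to floating point)
print(correlation([X, Y, Y], psi))   # -1
print(correlation([Z, Z, I], psi))   # +1
```

The pattern $\langle XXX \rangle = +1$ while $\langle XYY \rangle = -1$ (and its permutations) is exactly the GHZ paradox that rules out a local model with no communication at all.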

Theory Like Software

One interesting issue in quantum information science is the lack of hiring of theory people by physics departments in the United States over the last few years (by my count, four hires in the last six years). I think one of the main reasons this may be true is that physics departments are still skeptical of the field of quantum computing. For experimentalists, the hiring situation has been better (although, of course, no piece of cake by any measure!) The reason for this, it seems to me, is that experimental people in quantum information science are doing interesting experiments which push the boundaries of experimental physics. It is easier to justify hiring an experimentalist because there is some belief that they will be able to adjust if quantum information begins to fizzle out (I don’t believe it will, but apparently a large number of physicists do). In essence, physics departments feel that experimentalists are a better hedge than theorists.
But let’s take a look at this issue from the perspective of the next few years. As quantum computers grow from four- and five-qubit computers to ten- to hundred-qubit computers, the experimentalists will be in some ways less tied to fundamental physics and more tied to the engineering and technology of the devices they will be building. And there is another important factor: not all implementations of quantum computers are going to pay off. Thus, while hiring an experimentalist who is working on the implementation which really takes off can be a jackpot for a department, hiring one who is working on the implementation which fizzles can leave the department in exactly the position they are supposedly avoiding by not hiring theorists.
Now look at this from the perspective of hiring a theorist. Quantum information theorists are much more immune to the question of which implementation really takes off. Sure, some theorists are more tied to a particular implementation, but on the other hand the main bulk of theory is done in a way which is independent of any quantum computing platform. Thus quantum information theorists, like those involved in the computer science of software, are in many ways a more robust hire in the long run than an experimentalist.
Of course, this argument is only a small part of the big picture (i.e. what does happen if the field is a fad? what if you do believe you can pick out the best implementation? what if you only care about hiring someone who will have an impact in the next five to ten years?) but it’s certainly an argument which I wish more physics departments would listen to.

From Outer Space

Back. Washington was… sunny. Bet that was the last adjective you expected. Here is a copy of the talk I gave in the computer science and engineering department at the University of Washington (warning: it’s 17 megs).

Fifty Dimensional Computers

Sometimes there are conflicts that run deeply inside of me because of my original training as a physicist. One thing I have never understood in classical computer science is why the locality and three-dimensionality of our world don’t come into play in the theory of computational complexity. I mean, sure, I can understand how you would like to divorce the study of the complexity of algorithms from the underlying medium. Sure, I can also understand that those who study architectures spend copious amounts of time dealing with exactly how resources scale due to the constraints of locality and dimensionality. But isn’t it true that a true theory of the complexity of information processing should, at its most fundamental level, make reference to the dimensionality and connectivity of the space in which the computer is built? Perhaps the complexity of cellular automata gets some way towards this goal, but somehow it doesn’t feel like it goes all of the way. Most importantly, the usual conditions of uniformity of the cellular automata seem to me to be overly restrictive for a theory of computational complexity which doesn’t ignore issues of dimensionality and locality.
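Cellular automata do at least make locality explicit: in a one-dimensional CA, information can move at most one cell per time step, and yet rule 110 is known (by Cook’s proof) to be Turing complete. A tiny sketch of my own, just to make the local-update structure concrete:

```python
# One-dimensional cellular automaton with periodic boundaries. Each cell's next
# state depends only on itself and its two neighbors -- locality is baked in.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 32          # start from a single live cell
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)
```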
Another interesting spinoff of this line of reasoning is to think about how computation changes when the topology of spacetime is not trivial and even when the topology of spacetime is itself something which can change with time. What is the power of a computer which can change the very notion of the topology of the underlying circuit connectivity?

Turing Awards

….and the winners of the 2004 Turing Award are Vinton G. Cerf and Robert E. Kahn, inventors of TCP (transmission-control protocol). Needless to say, the fact that you are reading this message at all says a lot about their impact.

The Rise of Quant-Ph

As a break from talk writing, I decided to take a look at the number of papers posted to the quant-ph archive versus the number of papers posted to the hep-th archive on arXiv.org. Soon we (quant-ph) will take over the world, no?
[Plot: papers posted to quant-ph versus hep-th over time]
Pretty astounding that quant-ph is now almost as active as hep-th.
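(For the curious: a rough way to redo this count, assuming the arXiv API's category and submittedDate filters behave as documented; this sketch of mine is, of course, not how the plot above was made.)

```python
# Count arXiv submissions per category and year via the arXiv API's Atom feed.
import urllib.request
import xml.etree.ElementTree as ET

def yearly_count(category, year):
    query = ("http://export.arxiv.org/api/query?search_query="
             f"cat:{category}+AND+submittedDate:[{year}01010000+TO+{year}12312359]"
             "&max_results=1")
    feed = ET.parse(urllib.request.urlopen(query)).getroot()
    ns = {"opensearch": "http://a9.com/-/spec/opensearch/1.1/"}
    return int(feed.find("opensearch:totalResults", ns).text)

for cat in ("quant-ph", "hep-th"):
    print(cat, yearly_count(cat, 2004))
```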