FOCS Time

The 46th Annual IEEE Symposium on Foundations of Computer Science (a.k.a. FOCS) submission deadline was last Friday. Not surprisingly this means that there are more than the average number of interesting papers being posted on the Tuesday listing of the arXiv.
Of course, I will start by doing a little shameless self-promotion. Here is the latest and greatest from myself, Andrew Childs (Caltech) and Wim van Dam (UCSB) (everyone do the new paper dance) quant-ph/0504083:

From optimal measurement to efficient quantum algorithms for the hidden subgroup problem over semidirect product groups
Dave Bacon, Andrew Childs, and Wim van Dam
We approach the hidden subgroup problem by performing the so-called pretty good measurement on hidden subgroup states. For various groups that can be expressed as the semidirect product of an abelian group and a cyclic group, we show that the pretty good measurement is optimal and that its probability of success and unitary implementation are closely related to an average-case algebraic problem. By solving this problem, we find efficient quantum algorithms for a number of nonabelian hidden subgroup problems, including some for which no efficient algorithm was previously known: certain metacyclic groups as well as all groups of the form (Z_p)^r X| Z_p for fixed r (including the Heisenberg group, r=2). In particular, our results show that entangled measurements across multiple copies of hidden subgroup states can be useful for efficiently solving the nonabelian HSP.

The nonabelian hidden subgroup problem is one of the most frustratingly well-studied problems in the theory of quantum algorithms. This is because an efficient algorithm for the hidden subgroup problem over the symmetric group would lead to an efficient algorithm for the graph isomorphism problem (the graph isomorphism problem is deciding whether two graphs, presented to you by their adjacency matrices, are really the same graph under some relabeling of the vertices). To this end, there are many positive and many negative results about this problem. One thing we do know is that the query complexity of the problem is polynomial. We just don't know how to turn this into efficient algorithms. In our paper we show, in particular for certain groups which include the Heisenberg group, that there are multi-query quantum algorithms which make explicit use of the numerous queries to produce an efficient quantum algorithm. Certainly I take this as evidence that there is much left to be understood about how exactly we can achieve exponential speedups of quantum over classical computers.
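For concreteness, here is what the graph isomorphism problem asks, as a (hopelessly slow) brute-force Python sketch. This is purely illustrative, with nothing quantum about it: it just tries every relabeling.

```python
from itertools import permutations

def isomorphic(A, B):
    """Brute-force graph isomorphism check on adjacency matrices.

    Tries every relabeling (permutation) of the vertices of A and asks
    whether it turns A into B. Exponential time, of course: the whole
    point is that nobody knows how to do this efficiently in general.
    """
    n = len(A)
    if len(B) != n:
        return False
    for perm in permutations(range(n)):
        # Does relabeling vertex i of B as perm[i] of A match every edge?
        if all(A[perm[i]][perm[j]] == B[i][j]
               for i in range(n) for j in range(n)):
            return True
    return False

# A triangle with relabeled vertices is still a triangle...
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path     = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(isomorphic(triangle, triangle))  # True
print(isomorphic(triangle, path))      # False
```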
Another interesting component of this work is that it shows how an efficient quantum algorithm for the hidden subgroup problems we consider can be achieved given an efficient algorithm for certain algebraic problems. This was also true for the case of the dihedral hidden subgroup problem, where the algebraic problem turned out to be the subset sum problem. Since that problem is NP-complete, this didn't lead to an efficient algorithm. But for the groups we consider in our new paper, the problem is efficiently solvable classically, and we are able to bootstrap this onto a quantum algorithm to achieve the speedup.
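The algebraic problem in the dihedral case, subset sum, is easy to state; a minimal classical dynamic-programming solver (illustrative only, not the quantum algorithm, and pseudo-polynomial rather than truly polynomial) looks like:

```python
def subset_sum(values, target):
    """Classic dynamic program for subset sum: reachable holds every sum
    achievable by some subset of the values seen so far.

    Runs in time O(len(values) * target), which is pseudo-polynomial:
    polynomial in the magnitude of target, not its bit length, which is
    why subset sum can still be NP-complete.
    """
    reachable = {0}
    for v in values:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```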
Another interesting, and closely related, paper is by Cris Moore (University of New Mexico…no “h”!) and Alex Russell (University of Connecticut), quant-ph/0504067:

Bratteli Diagrams and Subexponential Time Quantum Algorithms for Hidden Subgroup Problems: or, Fourier Sampling Strikes Back
Cristopher Moore and Alexander Russell
We present an explicit measurement in the Fourier basis that solves an important case of the Hidden Subgroup Problem, including the case to which Graph Isomorphism reduces. This entangled measurement uses $k=log_2 |G|$ registers, and each of the $2^k$ subsets of the registers contributes some information. We then give a general framework using the Bratteli diagram of the group for constructing worst-case to average-case reductions from the HSP to generalized Subset Sum problems analogous to those of Regev for the dihedral group. As a result, we obtain subexponential-time quantum algorithms for the hidden subgroup problem on a number of new group families, including the Heisenberg and affine groups. Our framework also yields subexponential-time algorithms for finding the order of hidden subgroups in all solvable groups of low degree, including all nilpotent groups.

Here, Cris and Alex present a large number of subexponential-time algorithms for numerous groups (including the Heisenberg group, for which we were able to find a polynomial-time algorithm). They are able to do this by showing how these algorithms can arise from the classical solution to a problem coming from the representation theory of these groups. Again, what is essential in these algorithms is the use of multiple quantum queries.
As you can see, there seems to be quite a revival going on in research on the hidden subgroup problem. There is a lot of structure which many of us are starting to see! Fun times.
Along different lines, Ran Raz (Weizmann Institute) also has a very intriguing paper, quant-ph/0504075:

Quantum Information and the PCP Theorem
Ran Raz
We show how to encode $2^n$ (classical) bits $a_1,…,a_{2^n}$ by a single quantum state $|Psi>$ of size O(n) qubits, such that: for any constant $k$ and any $i_1,…,i_k in {1,…,2^n}$, the values of the bits $a_{i_1},…,a_{i_k}$ can be retrieved from $|Psi>$ by a one-round Arthur-Merlin interactive protocol of size polynomial in $n$. This shows how to go around Holevo-Nayak’s Theorem, using Arthur-Merlin proofs.
We use the new representation to prove the following results:
1) Interactive proofs with quantum advice: We show that the class $QIP/qpoly$ contains ALL languages. That is, for any language $L$ (even non-recursive), the membership $x in L$ (for $x$ of length $n$) can be proved by a polynomial-size quantum interactive proof, where the verifier is a polynomial-size quantum circuit with working space initiated with some quantum state $|Psi_{L,n} >$ (depending only on $L$ and $n$). Moreover, the interactive proof that we give is of only one round, and the messages communicated are classical.
2) PCP with only one query: We show that the membership $x in SAT$ (for $x$ of length $n$) can be proved by a logarithmic-size quantum state $|Psi >$, together with a polynomial-size classical proof consisting of blocks of length $polylog(n)$ bits each, such that after measuring the state $|Psi >$ the verifier only needs to read one block of the classical proof.
While the first result is a straightforward consequence of the new representation, the second requires the additional machinery of a quantum low-degree test that may be interesting in its own right.

I haven’t had time to work through this paper in detail, but one of the main results, that QIP(2) (one-round quantum interactive proofs) with polynomial quantum advice (a polynomial-sized quantum state) contains all languages, seems to me very surprising. I know that previously Scott Aaronson had shown that BQP (i.e., your standard efficient quantum algorithms) with polynomial quantum advice is contained in PP (probabilistic polynomial time, the worst-named complexity class ever!) with polynomial classical advice, but the jump to all languages seems to me quite a huge jump! Well, I certainly have some reading to do!
Finally there is a paper by Harry Buhrman (University of Amsterdam, CWI Amsterdam), Matthias Christandl (CQC Cambridge), Patrick Hayden (McGill), Hoi-Kwong Lo (University of Toronto) and Stephanie Wehner (CWI Amsterdam), quant-ph/0504078:

On the (Im)Possibility of Quantum String Commitment
Harry Buhrman, Matthias Christandl, Patrick Hayden, Hoi-Kwong Lo, and Stephanie Wehner
Unconditionally secure non-relativistic bit commitment is known to be impossible in both the classical and quantum worlds. However, when committing to a string of n bits at once, how far can we stretch the quantum limits? We consider quantum schemes where Alice commits a string of n bits to Bob, in such a way that she can only cheat on a bits and Bob can learn at most b bits of ”information” before the reveal phase. We show a negative and a positive result, depending on how we measure Bob’s information. If we use the Holevo quantity, no good schemes exist: a+b is at least n. If, however, we use accessible information, there exists a scheme where a=4 log n+O(1) and b=4. This is classically impossible. Our protocol is not efficient, however, we also exhibit an efficient scheme when Bob’s measurement circuit is restricted to polynomial size. Our scheme also implies a protocol for n simultaneous coin flips which achieves higher entropy of the resulting string than any previously known protocol.

Bit commitment is something like the cryptographic version of a securely sealed safe. The protocol begins with Alice supplying Bob data which commits Alice to a bit, but does not reveal the value of that bit to Bob. Later, when they wish to open the commitment, Alice sends some information to Bob such that Bob can infer what value Alice committed to. Unconditional security would mean that there is no way for Alice to change her commitment after she has communicated with Bob without Bob being able to detect the change, and similarly that Bob cannot learn anything about Alice’s bit before the reveal. It is known (or I should say there is a proof with what seem like reasonable foundations) that in the standard quantum information setting, unconditionally secure quantum bit commitment is not possible. But what about committing not to a single bit, but to a string? Now Alice can cheat on some bits, and Bob can learn some bits, but what exactly is the relationship between these two quantities? Interestingly, the authors show that when the measure of how much Bob learns is the accessible information, there are quantum schemes which are impossible classically. Nothing like a result showing that quantum information kicks classical information’s rear end!
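To make the commit/reveal structure concrete, here is a toy classical, hash-based commitment in Python. Note the hedge: this is only computationally binding and hiding (under standard hash-function assumptions), so it illustrates the shape of the protocol, not the unconditional security at issue in the paper.

```python
import hashlib
import secrets

def commit(bit: int):
    """Alice commits to a bit: she sends Bob the hash (the 'sealed safe')
    and keeps the random nonce secret until the reveal phase."""
    nonce = secrets.token_bytes(16)
    commitment = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return commitment, nonce

def reveal_ok(commitment: str, nonce: bytes, bit: int) -> bool:
    """Bob checks that the revealed (nonce, bit) matches the commitment."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == commitment

c, nonce = commit(1)
print(reveal_ok(c, nonce, 1))  # True: honest reveal succeeds
print(reveal_ok(c, nonce, 0))  # False: Alice can't claim the other bit
```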
So, by my count, on Tuesday we had two algorithms papers demonstrating new speedups, a complexity theory paper demonstrating the amazing power of quantum advice for quantum interactive proofs, and a cryptography paper demonstrating a new separation between quantum and classical cryptography. Not a bad day for the arXivs!

Caltech 6, MIT 1

I went to Caltech (for God’s sake, not “CalTech”) as an undergrad to be a prankster. Seriously. Along the way, I may have also learned a thing or two. For many Techers during my stay at Caltech, the ever-increasing threats of litigation against us merry pranksters and the administration’s lack of support for our dangerous acts of creativity were quite a disappointment. So it’s great to see that the pranking days of Caltech are far from over. This last week was MIT’s prefrosh weekend. MIT, of course, is that other Tech school, which seems to think it is in some way superior to the paradise known as Caltech. Seeing as how such clearly misguided arrogance should never be allowed to stand, some Caltech students decided to set the record straight with a series of pranks (which MIT students, in some blatant neural malfunction, label “hacks”). The Techers distributed shirts to the MIT prefrosh which read “Because not everybody can go to Caltech.” They turned a large “MASSACHVSETTS INSTITUTE OF TECHNOLOGY” sign (which, by the way, clearly shows that at MIT you don’t need to know how to spell to get into the school) into “THAT OTHER INSTITUTE OF TECHNOLOGY.” They also used a laser display to put the letters “C-A-L-T-E-C-H” on top of a building. Pictures are here.
Is this the beginning of a renewed rivalry between these two elite schools? Will students again start flaming (that’s Tech speak for failing out) because of hours spent planning revenge? We can only hope so.

The Cost of Geodesics

Michael Nielsen has a nice post about the geodesic equation when there is a cosmological constant. This reminded me of an interesting property of geodesics in general relativity.
What Michael showed was how you can obtain the equation of geodesic motion in a few quick and easy steps, once you assume a form for the stress-energy tensor. A simple question is whether you can obtain the geodesic equation without such an assumption. And this is exactly what Einstein, Infeld, and Hoffmann showed in 1938 (Annals of Mathematics, 39, p. 65). What these authors showed was that if you introduce singularities in the field equations, these singularities indeed follow geodesics. In these models, the geodesic equation indeed comes for “free,” without any assumption about the stress-energy tensor.
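For reference, the equation of motion that comes “for free” in the Einstein-Infeld-Hoffmann analysis is the usual geodesic equation:

```latex
% Geodesic motion: x^\mu(\tau) parametrized by proper time \tau, with
% \Gamma^\mu_{\alpha\beta} the Christoffel symbols of the metric.
\frac{d^2 x^\mu}{d\tau^2}
  + \Gamma^\mu_{\alpha\beta}\,
    \frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0
```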
It’s interesting to consider the analogous situation for Maxwell’s equations and for the equations of motion in a non-viscous fluid. Maxwell’s equations in vacuum, when we add singularities to the fields, do not properly lead to the equations of motion for charged particles. On the other hand, the equations of motion for vortices, i.e. singularities, in a non-viscous fluid are determined by the equations of motion of the fluid alone. It seems that the essential reason this works is that the former equations are linear, while the latter are non-linear.
Oh, and another interesting fact about the Einstein-Infeld-Hoffmann derivation is that it doesn’t give a sign for the mass of the singularity: it gives no reason why gravitation is always attractive!

Moving

Life is what happens when you’re busy making other plans – John Lennon

Over a year ago, my father passed away. A few weeks after this happened, my mom and I had to take my sister to San Francisco to see about the deteriorating condition of my sister’s kidney. Cathy, my sister, is handicapped, having some sort of syndrome which doctors have really never been able to identify (she is about three feet tall, has one side bigger than the other, and is the joy of my family.) Over the next few months, the doctors we saw became more and more concerned about her kidney until a little over half a year ago, she had a tube inserted in her abdominal cavity and went onto dialysis.
My mom and my sister live in the town where I grew up, Yreka, California, a small town of around 7000 people in very northern California. Because of my sister’s size, the doctors she needed were pediatric nephrologists. Needless to say, there aren’t any such specialists in Yreka, the closest being located in San Francisco (five hours south) and in Portland (five hours north). The work of taking care of my sister (hooking her up to the dialysis machine, worrying about infection, etc.) was (and is) quite a stress on my mother, especially with my dad just recently passing away. (Oh, and if you want more bad news on top of all this death and kidney failure, my mom had to put down our family dog during the same time period. When it rains, it pours. On us it poured ugly mad muck.)
So while all of this was happening, I realized that I needed to get closer to Yreka, or to get my mom and sister closer to me. There was also the problem that living in Yreka was looking like less and less of a viable option for my mom and sister. Which is hard, because my mom has lived in Yreka for thirty-some-odd years. Adding another element to the equation, my two cousins on my mom’s side both live in Seattle, Washington, a place with good pediatric nephrology and a reasonable place for my mom to move to. There is also the consideration of a kidney transplant, and my cousins and me being in Seattle would make it the easiest place to pursue this avenue. So I began a quest to see if there was some way that I could get myself to Washington. Needless to say, the last year has been filled with more than a few days in which I contemplated whether I would be able to continue my career in academia. The choice was always easy, in that I knew that if helping my mom and my sister meant that I couldn’t continue my career in academia, well then helping was clearly the right decision. Knowing that leaving academia would not be a failure was, however, fighting dearly with my desire to continue doing the work that I so love.
Fortune, however, finally decided to smile upon my family. The smile came in the form of Professor Mark Oskin and the University of Washington Computer Science and Engineering department. I’m happy to announce that I will be starting a position as a Research Scientist in the Computer Science and Engineering department at the University of Washington on May 1st. I am very, very excited about this move, not only because it brings some much-needed relief to my family, but also because I really like the CS&E department at UW and was encouraged by the interest in quantum computing expressed by those I talked to when I visited the campus a few months back. I’m so far beyond lucky to get this opportunity that it makes winning the lottery seem reasonable.
There is, of course, some sadness at having to leave the Santa Fe Institute so soon. When I moved here, I imagined that I would be spending two years in Santa Fe. The people at SFI are truly awesome, and the diversity of such a small place is pretty astounding. Plus, SFI has an attitude which says that one must push the boundaries in order to do really good science, and that the big questions are what you should be working on, not the small routine science. But mostly I will miss the people from SFI: tea time, listening to them discuss life, the universe, and everything, will be dearly missed (not to mention the amazing number of snacks at SFI tea time!)
And so, another chapter begins. Oh, and if anyone knows of a two bedroom two bath apartment within reasonable distance of UW, let me know!

Snarky Mode On

Robert Laughlin, Nobel prize winner for the theory behind the fractional quantum Hall effect, has a new book out, “A Different Universe.” The book is interesting, but it also has its problems. As you might guess from previous posts I’ve made, Professor Laughlin has an amazing view of quantum computers:

There is a great deal of interest lately in the quantum computer, a fundamentally new kind of computational hardware that would exploit the entanglement of the quantum wave function to perform calculations presently impossible with conventional computers. The most important of these is the generation of enormous prime numbers and the quick factorization of other enormous numbers. The impossibility of factoring a number that is the product of two large primes in reasonable time with conventional computers is the basis of modern cryptography. However, quantum computation has a terrible Achilles heel that becomes clear when one confronts the problem of reading out the answer: the effects that distinguish quantum computers from conventional ones also cause quantum indeterminism. Quantum-mechanical wave functions do indeed evolve deterministically, but the process of turning them into signals people can read generates errors. Computers that make mistakes are not very useful, so the design issue in quantum computation that counts is overcoming mistakes by measurement. A textbook method for doing this is to place a million copies of the same experiment in a small box and measure something they do collectively, generating oscillating magnetic fields, for example, as occurs in a quantum computer built with electron spins. The damage inflicted by the measurement process then affects only a few copies, leaving the rest intact. This trick is so powerful that variations of it enable you to read out the entire wave function of any quantum computer, at least in principle. However a logical implication is that you have created not a fabulous new kind of digital computer but a conventional analogue computer, a type of machine we do not use in the modern era because it is so easily disrupted by noise. Thus the frenzy over quantum computing misses the key point that the physical basis of computational reliability is emergent Newtonianness.
One can imagine doing a computation without exploiting these principles, just as one can imagine proving by brute force that broken symmetry occurs, but a much more likely outcome is that eliminating computational mistakes will prove to be fundamentally impossible because its physical basis is absent. The view that this problem is trivial is a fantasy spun out of reductionist beliefs. Naturally, I hope I am wrong, and I wish those who invest in quantum computing the best of luck. I also ask that anyone anxious to invest in a bridge in lower Manhattan contact me right away, for there will be discounts for a limited time only.

Wow. Can I really read this and not respond? I just can’t resist. And especially I just can’t resist snarking. I should apologize for what I’m about to write, but my feeble mind just can’t take it. I just can’t take it anymore! So here are my suggestions for Laureate Laughlin:

1. Please read J. von Neumann’s “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components” (1956). The basis of computer reliability has absolutely nothing to do with “Newtonianness.” The basis of conventional computer reliability is redundancy, and, more physically, the thermodynamics of many condensed matter systems.
2. After you’ve mastered the most fundamental ideas of fault tolerance, it might be useful to understand the ideas behind error correction. Please read C. Shannon’s “A Mathematical Theory of Communication” (1948). Sure we are going backwards in time, but I think it’s important for you to realize that redundancy (“place a million copies”) is not the only way to encode information. Indeed this fact will become very important as we move on to step 3.
3. Now you’re ready for the big stuff. I know you know quantum theory like the back of your hand, so this step will be easier for you than for many others. Please read John Preskill’s “Fault-tolerant Quantum Computation.” See how the previous two ideas, when slightly modified for the quantum world, lead to a theory of fault-tolerant quantum computers. Isn’t that amazing? I consider it to be one of the most important results in physics in the last thirty years, but you’re older and wiser, so you may feel free to downgrade it. But please don’t desecrate what you haven’t had the time to understand. Quantum error correcting degrees of freedom are most definitely not the simplistic collective degrees of freedom which you insinuate (“oscillating magnetic fields”). The idea is more subtle, and thus, I believe, more beautiful. While beauty is in the eye of the beholder, you must surely admit that your solution to the fractional quantum Hall effect is only beautiful when you have the background to understand the theory; so too quantum fault tolerance is beautiful, but only once you’ve sat down and understood the theory.
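A toy illustration of the point in steps 1 and 2: even naive redundancy plus majority voting suppresses errors (and Shannon-style codes do far better than repetition). The quantum codes of step 3 are emphatically not this, but the classical sketch fixes ideas; the error rate and trial count below are arbitrary illustration values.

```python
import random

def encode(bit, n=3):
    """Redundancy a la von Neumann: repeat the bit n times."""
    return [bit] * n

def noisy(codeword, p, rng):
    """Flip each copy independently with probability p."""
    return [b ^ (rng.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote: correct as long as fewer than half the copies flipped."""
    return int(sum(codeword) > len(codeword) / 2)

rng = random.Random(0)
p = 0.1
trials = 10000
errors = sum(decode(noisy(encode(1), p, rng)) != 1 for _ in range(trials))
# With 3 copies the logical error rate is about 3p^2 = 0.03, below p = 0.1
print(errors / trials)
```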
Oh, and by the way, the generation of large prime numbers is easy, not hard, for conventional computers (but you did better than the string theorist Michio Kaku, who I once saw on T.V. claim that quantum computers were amazing because they could efficiently multiply large numbers.)

My Electron is a Black Hole

There are certain coincidences which, when you first hear about them, make you sit up all night thinking wild and crazy thoughts. I think my favorite example comes from the Kerr-Newman black hole. The Kerr-Newman solution to the Einstein-Maxwell equations describes black holes with charge and angular momentum. What is strange about the Kerr-Newman black holes? Suppose we examine the gyromagnetic ratio for these objects. The gyromagnetic ratio is 2Mm/QJ, where M is the mass of the black hole, m is the magnetic moment of the black hole, Q is the charge of the black hole, and J is the angular momentum of the black hole. For a Kerr-Newman black hole (and for many other charged solutions in general relativity) the gyromagnetic ratio is exactly 2. Sound familiar? Well it should, because this is the value of the gyromagnetic ratio for a Dirac electron (there are some claims that this value of 2 is a triumph of relativistic quantum field theory, but it must be said that there are nonrelativistic arguments for a value of 2 as well). Talk about a strange and wonderful coincidence. Or more than a coincidence? So now you can spend the rest of the day worrying about whether the electron is nothing more(!) than a charged spinning black hole!
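Written out, the coincidence is simply:

```latex
% Dimensionless gyromagnetic ratio (g-factor) of a body with mass M,
% charge Q, angular momentum J, and magnetic moment m:
g = \frac{2 M m}{Q J},
% and for the Kerr-Newman black hole, as for the Dirac electron,
g = 2 .
```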

Information in Economics

Yesterday I attended a talk by Ed Thorp. Ed not only wrote the highly influential book “Beat the Dealer,” which first popularized card counting for blackjack, but also did work which anticipated the Black-Scholes model of options pricing (in fact, he was trading using the basic formula before the formula was even written down) and was part of one of the earliest hedge funds, which had annualized returns of around 30 percent over nearly thirty years.
In his talk, Ed discussed, via numerous examples, how the efficient market hypothesis is wrong. The efficient market hypothesis asserts that stock prices are determined by a discounting process such that they equal the discounted value (present value) of expected future cash flows. Whenever you hear a talk about the efficient market hypothesis (including Ed’s yesterday), you always hear that part of the rationale which leads toward this hypothesis is that prices reflect all known information. Now whenever I hear this, I feel like I want to curl up into a ball and cry, because I have a hard time connecting this statement with the information I am familiar with (Shannon et al.) and also with what it means to have “all” information. What in the world is “all” information? Do I need to know the wavefunction of everyone involved? Sheesh. No wonder traders like to give efficient market hypothesizers a bad time.
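The discounting the hypothesis refers to is just present value; here is a minimal Python sketch (a constant discount rate and a toy cash-flow stream are assumed purely for illustration):

```python
def present_value(cash_flows, r):
    """Discounted value of expected future cash flows: the price the
    efficient market hypothesis says the asset should trade at.

    cash_flows[t] is the expected cash flow at the end of year t+1;
    r is the (assumed constant) annual discount rate.
    """
    return sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))

# A stock expected to pay $5 a year for 3 years, discounted at 10%:
print(round(present_value([5, 5, 5], 0.10), 2))  # 12.43
```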

Wormholes

A wormhole is a topological feature of spacetime which essentially links two locales in the universe. Perhaps the most famous (and one of the first examples) considered by physicists is the so-called “Einstein-Rosen bridge.” This is a solution to the vacuum Einstein equations in which one pastes two Schwarzschild solutions together in such a way that there is a link between one part of the universe and another part of the universe. Here is a nice picture:
Most people, when they think about wormholes, think about science fiction stories where wormholes are used for traveling between distant locales. Go in one throat of the wormhole and come out on the other side of the universe. Sounds like fun!
Wormhole solutions in general relativity are, however, pretty nasty. In particular, they require “negative energy densities.” There is also the interesting problem that wormholes lead to time travel. Consider creating a wormhole which connects two neighboring regions. Now take one of the wormhole throats on a trip over to Alpha Centauri and back. Then there will be a twin paradox between the throats: one of the throats will be older than the other. Thus we can use the throats to travel back in time.
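The aging asymmetry between the throats is just ordinary special-relativistic time dilation; a quick Python check, in units with c = 1 (the trip duration and speed are arbitrary illustration values):

```python
import math

def proper_time(coordinate_time, v):
    """Proper time elapsed on a clock moving at speed v (as a fraction
    of the speed of light) over a given coordinate time:
    tau = t * sqrt(1 - v^2). Applied to one wormhole throat, this is
    the twin-paradox age difference between the two throats."""
    return coordinate_time * math.sqrt(1 - v ** 2)

# Send one throat on a 10-year (stay-at-home frame) round trip at 0.8c:
t = 10.0
print(proper_time(t, 0.8))  # 6.0: the traveling throat ages only 6 years
```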
Today I was reading Einstein and Rosen’s original paper [Phys. Rev. 48, 73–77 (1935)]. What was interesting to me was not so much the solution but the reason Einstein and Rosen were interested in their solution.

In spite of its great success in various fields, the present theoretical physics is still far from being able to provide a unified foundation on which the theoretical treatment of all phenomena could be based. We have a general relativistic theory of macroscopic phenomena, which however has hitherto been unable to account for the atomic structure of matter and for quantum effects, and we have a quantum theory, which is able to account satisfactorily for a large number of atomic and quantum phenomena but which by its very nature is unsuited to the principle of relativity. Under these circumstances it does not seem superfluous to raise the question as to what extent the method of general relativity provides the possibility of accounting for atomic phenomena. It is to such a possibility that we wish to call attention in the present paper in spite of the fact that we are not yet able to decide whether this theory can account for quantum phenomena.

Here we see Einstein and Rosen proposing that quantum theory is a consequence of general relativity! In particular, what Einstein and Rosen were most interested in was that one could have solutions to the vacuum Einstein equations which had no singularities but which bore some resemblance to (two) particles with mass. In fact, Einstein and Rosen even conjecture that the throats of the Einstein-Rosen bridge could be neutral particles like a neutron or a neutrino!
One of the most interesting questions to ponder is what Einstein’s reaction would have been to Bell inequality violations by quantum theory. John Bell was able to show that correlations produced between spacelike separated quantum systems cannot in general be explained by local degrees of freedom carried with these systems. Reading the Einstein-Rosen paper, in which nontrivial topology is introduced without blinking, I’m inclined to think that Einstein would have thought of Bell’s result not as invalidating “classical” reasoning about quantum theory, but instead as a validation of the point of view advocated in this paper: that quantum theory is a consequence of a topological extension of general relativity.

Time Machine

For sale on ebay, a time machine:
A Time Machine
The bidding, unfortunately, for this wonderful device is over. Well, uh, unless the time machine actually works, in which case words like “over” seem to have distinct problems.