Over at masteroftheuniverse, the master has posted a great list of prop bets. Among his bets is one that probably won’t work on many computer scientists (or it shouldn’t, if they’ve had even a decent theory course) based upon the birthday problem. Sometimes the birthday problem is called the birthday paradox, but the problem is no more a paradox than the twin paradox is about twins. The birthday problem has to do with the probability that a set of randomly drawn people share a birthday. In other words, assuming that everyone in a group of N people has an equal probability of being born on a given day, what is the probability that at least two of these people share a birthday? Quite surprisingly, or at least surprisingly the first time you hear it, if N is 23 this probability is already greater than 50 percent. In computer science this type of process comes up all the time and is responsible for lots of the square roots one sees in the running times of algorithms. The master’s blog post reminded me of a version of the birthday paradox that Wim van Dam once told me (if anyone knows its past history, please post a comment)…a quantum birthday paradox.
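To see where the 23 comes from, here’s a quick sanity check in Python (assuming, as usual, 365 equally likely birthdays and ignoring leap years):

```python
# P(at least two of n people share a birthday), with 365 equally likely days.
def birthday_collision_prob(n, days=365):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct

print(birthday_collision_prob(22))  # ~0.476
print(birthday_collision_prob(23))  # ~0.507, past 50% already
```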
Here is the setup. Suppose that we are sampling from the set $\{1, 2, \ldots, 2N\}$. In particular, consider the situation, classically, where we are sampling from two distributions over this set, $P_0$ and $P_1$. Both $P_0$ and $P_1$ are distributions which are $\frac{1}{N}$ on exactly $N$ of the elements of $\{1, 2, \ldots, 2N\}$ and $0$ on the rest. I will guarantee you that either the distributions are equal, $P_0 = P_1$, or that whenever $P_0$ has weight $\frac{1}{N}$ on an element, $P_1$ has weight $0$ on it. In the latter case the probability vectors for these distributions are orthogonal, so I will denote it $P_0 \perp P_1$. So the problem is: given the ability to classically sample from these distributions, how many samples must one take to identify which of the two cases, $P_0 = P_1$ or $P_0 \perp P_1$, holds? One can easily see that this problem is like the birthday problem: sampling alternately from $P_0$ and $P_1$, one basically has to wait for a collision between the two sample streams in order to conclude that $P_0 = P_1$. Thus you can see that it requires about $\sqrt{N}$ samples to distinguish these two cases. More precisely, to distinguish between the two cases with some constant probability, say a probability of $3/4$, we need to sample $O(\sqrt{N})$ times.
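If you’d rather see the $\sqrt{N}$ than prove it, here is a minimal Monte Carlo sketch of the $P_0 = P_1$ case (the function name and trial counts are mine, purely for illustration). In the $P_0 \perp P_1$ case a cross-collision simply never occurs, which is exactly why you have to wait this long before declaring the distributions equal:

```python
import random

def rounds_until_cross_collision(N, trials=2000):
    """Average number of rounds, drawing one sample from each of P0 and P1
    per round, until a P0-sample collides with a P1-sample. In the P0 = P1
    case both streams are uniform on the same N-element support, which we
    model as uniform draws from range(N)."""
    total = 0
    for _ in range(trials):
        seen0, seen1 = set(), set()
        rounds = 0
        while True:
            x0 = random.randrange(N)  # sample from P0
            x1 = random.randrange(N)  # sample from P1
            rounds += 1
            if x0 == x1 or x0 in seen1 or x1 in seen0:
                break                 # cross-stream collision observed
            seen0.add(x0)
            seen1.add(x1)
        total += rounds
    return total / trials

for N in (100, 400, 1600):
    # quadrupling N roughly doubles the answer: sqrt(N) scaling
    print(N, rounds_until_cross_collision(N))
```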
Okay, so what does this have to do with a quantum birthday paradox? Well, now consider the situation where, instead of being given two probability distributions $P_0$ and $P_1$, one is given two quantum states, $|\psi_0\rangle$ and $|\psi_1\rangle$, with the property that if you simply measure them in the computational basis you obtain classical distributions that behave like $P_0$ and $P_1$. That is, let $|\psi_0\rangle$ and $|\psi_1\rangle$ be superpositions over $2N$ computational basis states with the property that, in this basis, each has exactly $N$ amplitudes which are $\frac{1}{\sqrt{N}}$ and $N$ amplitudes which are zero. We are guaranteed that either $|\psi_0\rangle = |\psi_1\rangle$ or $\langle\psi_0|\psi_1\rangle = 0$, and the goal, using many copies of $|\psi_0\rangle$ and $|\psi_1\rangle$, is to distinguish between these two cases. Now if one simply measures these states in the computational basis, then one obtains probability distributions exactly like $P_0$ and $P_1$. But this is the quantum world, so we don’t have to measure in this basis. So is there a basis we can measure in that lets us use fewer than $O(\sqrt{N})$ copies of $|\psi_0\rangle$ and $|\psi_1\rangle$ to distinguish the two cases, $|\psi_0\rangle = |\psi_1\rangle$ versus $\langle\psi_0|\psi_1\rangle = 0$?
The answer is yes, indeed. Of course that’s the answer: why else would I be writing this blog post? The trick is to consider the fully symmetric and anti-symmetric subspaces of the joint system of the two states. In particular, define the states
$$|S_{ij}\rangle = \frac{1}{\sqrt{2}}\left(|i\rangle|j\rangle + |j\rangle|i\rangle\right) \text{ if } i < j, \qquad |S_{ii}\rangle = |i\rangle|i\rangle,$$

and

$$|A_{ij}\rangle = \frac{1}{\sqrt{2}}\left(|i\rangle|j\rangle - |j\rangle|i\rangle\right) \text{ if } i < j,$$

where $i, j \in \{1, 2, \ldots, 2N\}$.
These states form a complete basis for the joint system of the two states, with the $|S_{ij}\rangle$ and $|S_{ii}\rangle$ representing the symmetric states and the $|A_{ij}\rangle$ representing the anti-symmetric states. Suppose that we measure $|\psi_0\rangle \otimes |\psi_1\rangle$ in this basis. Now if $|\psi_0\rangle = |\psi_1\rangle$, then $|\psi_0\rangle \otimes |\psi_1\rangle$ is symmetric under exchange of the two subsystems. That is, if we measure in the above basis we will only ever obtain symmetric outcomes $|S_{ij}\rangle$ or $|S_{ii}\rangle$. If, on the other hand, $\langle\psi_0|\psi_1\rangle = 0$, then it is straightforward to see that $|\psi_0\rangle \otimes |\psi_1\rangle$ has equal support on the symmetric and the anti-symmetric states. In particular, if
$$|\psi_0\rangle = \sum_{i=1}^{2N} \alpha_i |i\rangle \quad \text{and} \quad |\psi_1\rangle = \sum_{i=1}^{2N} \beta_i |i\rangle,$$

where the amplitudes are real and $\sum_i \alpha_i \beta_i = 0$, then we can expand $|\psi_0\rangle \otimes |\psi_1\rangle$ as

$$|\psi_0\rangle|\psi_1\rangle = \sum_{i<j} \frac{\alpha_i \beta_j + \alpha_j \beta_i}{\sqrt{2}}\,|S_{ij}\rangle + \sum_i \alpha_i \beta_i\,|S_{ii}\rangle + \sum_{i<j} \frac{\alpha_i \beta_j - \alpha_j \beta_i}{\sqrt{2}}\,|A_{ij}\rangle.$$

Since $\sum_i \alpha_i \beta_i = 0$, a short calculation shows that the anti-symmetric outcomes carry total probability exactly $\frac{1}{2}$ (for our particular states every individual product $\alpha_i \beta_i$ is in fact zero). So the protocol is simple: measure copies of $|\psi_0\rangle \otimes |\psi_1\rangle$ in the above basis. If $|\psi_0\rangle = |\psi_1\rangle$ you will only ever see symmetric outcomes, while if $\langle\psi_0|\psi_1\rangle = 0$ each pair yields an anti-symmetric outcome with probability $\frac{1}{2}$, so after $k$ pairs the orthogonal case escapes detection with probability only $2^{-k}$. A constant number of copies thus suffices, versus the $O(\sqrt{N})$ samples needed classically.
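If you want to check that expansion numerically, here is a toy numpy sketch (the variable names and the choice $N = 8$ are mine, just for illustration): it computes the total weight that $|\psi_0\rangle \otimes |\psi_1\rangle$ puts on the anti-symmetric outcomes $|A_{ij}\rangle$ in each of the two cases.

```python
import numpy as np

N = 8
dim = 2 * N  # computational basis labels 0, ..., 2N - 1

# |psi0> is uniform on the first N labels; |psi1> is either identical to it
# (the P0 = P1 case) or uniform on the remaining N labels (the orthogonal case).
psi0 = np.zeros(dim); psi0[:N] = 1 / np.sqrt(N)
psi_same = psi0.copy()
psi_orth = np.zeros(dim); psi_orth[N:] = 1 / np.sqrt(N)

def antisymmetric_weight(a, b):
    """Total probability of an anti-symmetric outcome when measuring |a>|b>
    in the S_ij / A_ij basis: the sum over i < j of |a_i b_j - a_j b_i|^2 / 2."""
    outer = np.outer(a, b)                 # matrix of amplitudes a_i b_j
    anti = (outer - outer.T) / np.sqrt(2)  # amplitudes on the |A_ij>
    return float(np.sum(np.abs(np.triu(anti, k=1)) ** 2))

print(antisymmetric_weight(psi0, psi_same))  # 0.0 -- identical states never flag
print(antisymmetric_weight(psi0, psi_orth))  # 0.5 -- orthogonal states flag half the time
```

Each measured pair flips an honest coin in the orthogonal case, so a handful of copies settles the question, no matter how large $N$ is.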
“Well, besides showing a cool case where quantum exponentially outperforms classical,…”
Exponentially?
Oops, uh “$\sqrt{N}$ versus constant”-ly outperforms classical
Dave, it was interesting that immediately to the right of your post was a ScienceBlogs.Com link to Practically useful: what the Rosetta protein modeling suite can do for you … which is like your post in being also about the efficient extraction of useful information from the smallest feasible number of measurements, and with the least feasible computational effort.
These two topics are IMHO equally interesting, and possibly convey a shared set of lessons-learned about fundamental physics. One key difference is that your problem’s quantum states (by assumption) are not thermally equilibrated, while Rosetta’s conformations (by assumption) are.
It’s clear that this thermal equilibration makes all the difference in the world, regarding the feasibility of efficient measurement, simulation, and estimation.
Since we know that our universe was born out of a thermal equilibration process (the Big Bang), perhaps it’s not surprising that (in the physical universe) we never encounter quantum dynamical systems that can’t be simulated efficiently.
Indeed, we begin to wonder whether it might be infeasible even in principle to physically realize impossible-to-simulate quantum systems … in part because the quantum state-space that was born in the Big Bang is not a linear Hilbert space, any more than the spatial geometry of the universe—that was born in the same Big Bang—is Euclidean.
I’m not clear on whether constructing the classical and the quantum tests with sufficient accuracy to make them work well enough for practical purposes requires comparable time and resources.
If it takes a unit A of time and resources to construct an apparatus that classically compares one element from $P_0$ with one element from $P_1$, and a unit B of time and resources to perform each query instance, it is not clear that it takes time and resources of order A to construct an apparatus that projects reliably onto the symmetric and anti-symmetric basis sets, nor that it takes time and resources of order B to perform each query instance.
In the absence of a realistic estimate of the relative costs of implementation, I can’t see what significance the $\sqrt{N}$ comparison has.
> Exponentially?
Right, not exponential but like Grover it’s an $N^{1/2}$ improvement.
Peter: the resources needed for the swap test are certainly tractable (the swap test is straightforward to implement…google “swap test quantum” for an explanation of the circuit). The objection you could raise is that this is an oracle result, and the oracles I’m comparing are certainly not equivalent. Fine. But if you are given a quantum oracle, the fact that you can solve this problem in constant time, versus $N^{1/2}$ if you use the oracle classically, should make you do a double take.
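For those who’d rather see it than google it, here’s a minimal statevector sketch of the swap test (the helper and the toy states below are my own illustration, not a prescription for hardware):

```python
import numpy as np

def swap_test_p1(psi0, psi1):
    """Simulate the swap test: ancilla |0>, Hadamard, controlled-SWAP of the
    two registers, Hadamard, then measure the ancilla. The ancilla-|1> branch
    ends up carrying (|psi0>|psi1> - |psi1>|psi0>) / 2, so
    P(ancilla = 1) = (1 - |<psi0|psi1>|^2) / 2."""
    joint = np.kron(psi0, psi1)
    swapped = np.kron(psi1, psi0)    # the two registers exchanged
    branch1 = (joint - swapped) / 2  # amplitude on the ancilla-|1> branch
    return float(np.vdot(branch1, branch1).real)

e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(swap_test_p1(e0, e0))  # 0.0 -- identical states never trigger the test
print(swap_test_p1(e0, e1))  # 0.5 -- orthogonal states trigger it half the time
```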
Um, “not exponential”? No: classically it’s $N^{1/2}$, quantumly it’s constant. Normally we would measure the problem size in the number of bits needed to express $N$, i.e., this is an exponential-to-constant improvement, i.e., MUCH MUCH better than a polynomial Grover speedup.