Last Thursday, at the QIP rump session in Beijing, John Smolin described recent work with Graeme Smith and Alex Vargo [SSV] showing that arbitrarily large numbers $latex N$ can be factored by using this constant-sized quantum circuit
to implement a compiled version of Shor’s algorithm. The key to SSV’s breathtaking improvement is to choose a base for exponentiation, $latex a$, such that the function $latex a^x \bmod N$ is periodic with period 2. (This kind of simplification—using a base with a short period such as 2, 3, or 4—has in fact been used in all experimental demonstrations of Shor’s algorithm that we know of.) SSV go on to show that an $latex a$ with period 2 exists for every product of distinct primes $latex N=pq$, and therefore that the circuit above can be used to factor any such number, however large.

The problem, of course, is that in order to find a 2-periodic base $latex a$, one needs to know the factorization of $latex N$. After pointing this out, and pedantically complaining that any process requiring the answer to be known in advance ought not to be called compilation, the authors forge boldly on and note that their circuit can be simplified even further to a classical fair coin toss, giving a successful factorization whenever it is iterated sufficiently many times to obtain both a Head and a Tail among the outcomes (like having enough kids to have both a girl and a boy). Using a penny and two different US quarters, they successfully factor 15, RSA-768, and a 20,000-bit number of their own invention by this method, and announce plans for implementing the full circuit above on state-of-the-art superconducting hardware.

When I asked the authors of SSV what led them into this line of research, they said they noticed that the number of qubits used to do Shor demonstrations has been decreasing over time, even as the number being factored increased from 15 to 21, and they wanted to understand why. Alas, news travels faster than understanding: there have already been inquiries as to whether SSV might provide a practical way of factoring without the difficulty and expense of building a large-scale quantum computer.
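For readers who want to see the circularity concretely, here is a minimal sketch (mine, not from the SSV paper; the function names are made up for illustration) of how a 2-periodic base can be built when $latex N=pq$ with $latex p$, $latex q$ distinct odd primes. By the Chinese Remainder Theorem one can pick $latex a \equiv 1 \pmod p$ and $latex a \equiv -1 \pmod q$, so that $latex a^2 \equiv 1 \pmod N$ while $latex a \neq \pm 1 \pmod N$, and the factors then fall out of two gcd computations.

```python
from math import gcd

def two_periodic_base(p, q):
    """Construct a base a of multiplicative order 2 mod N = p*q,
    given distinct odd primes p and q (which is exactly the catch)."""
    N = p * q
    # CRT: choose a with a = 1 (mod p) and a = -1 (mod q).
    # Writing a = 1 + p*t, we need p*t = -2 (mod q), i.e. t = -2 * p^{-1} (mod q).
    t = (-2 * pow(p, -1, q)) % q      # modular inverse needs Python 3.8+
    a = (1 + p * t) % N
    assert pow(a, 2, N) == 1 and a not in (1, N - 1)
    return a

def factors_from_base(N, a):
    """Given a base of period 2, recover the two prime factors of N."""
    return gcd(a - 1, N), gcd(a + 1, N)

if __name__ == "__main__":
    p, q = 3, 5                        # N = 15, the perennial demonstration number
    N = p * q
    a = two_periodic_base(p, q)        # requires knowing p and q in advance
    print(N, a, factors_from_base(N, a))   # prints: 15 4 (3, 5)
```

Running it only restates SSV’s point, of course: the “compilation” step already contains the answer.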
Brilliant!
The very first sentence of the Smolin/Smith/Vargo preprint arXiv:1301.7007 (which is wonderfully dead-pan!) asserts a Great Quantum Challenge:
Here a Great Quantum Challenge is (with a nod to Niels Bohr) a challenge whose dual also is a Great Quantum Challenge.
A reasonable statement of the dual Great Quantum Challenge is along the lines of
Here “practically capable” is understood to mean something like “requiring computational resources that are polynomial in the bit-length of a simulated dataset.”
For the present it is entirely unknowable — and thus reasonably debatable — which (if either) Great Quantum Challenge is likely to be fulfilled.
It is evident, though, that throughout the past several decades, progress toward the second Great Quantum Challenge has been more uniformly rapid, more cumulative in capabilities demonstrated, and more transformational in practical benefits than progress toward the first Great Quantum Challenge.