…with high probability.
That joke was recycled, and in recent work, Bristol experimentalists have found a way to recycle photonic qubits to perform Shor’s algorithm using only $latex \lceil \log_2 N \rceil + 1$ qubits. (Ok, so actually it was Kitaev who figured this out, as they are basically using Kitaev’s semi-classical QFT. But they found a way to do this with photons.) Further reductions in the number of qubits can be achieved by “compiling” to throw out the unused ones. In this case, we are left with a qubit and a qutrit. The measurement feedback is also replaced with the usual postselection. However, buried in the body of the paper is the main advance:
Our scheme therefore requires two consecutive photonic CNOT gates–something that has not previously been demonstrated–acting on qubit subspaces of the qutrit.
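(As an aside, the recycling trick itself is easy to simulate classically. Here is a minimal sketch of Kitaev-style iterative phase estimation in Python/NumPy: a single control qubit is measured and reused once per bit, with classical feed-forward rotations standing in for the inverse QFT. The function name and structure are my own illustration, not the Bristol implementation.)

```python
import numpy as np

def semiclassical_qpe(phi, n, rng=np.random.default_rng(0)):
    """Estimate an eigenphase 0 <= phi < 1 to n bits using one recycled
    control qubit (Kitaev's semiclassical QFT).  Bits are measured from
    least significant to most significant; classical feed-forward
    rotations replace the coherent inverse QFT."""
    bits = []  # measured bits, least significant first
    for k in range(n):
        # Controlled-U^(2^(n-1-k)) kicks the phase 2*pi*phi*2^(n-1-k)
        # onto the |1> branch of the control qubit.
        theta = 2 * np.pi * phi * 2 ** (n - 1 - k)
        # Feed-forward: rotate away the contribution of the known bits.
        for j, b in enumerate(bits):
            theta -= np.pi * b / 2 ** (k - j)
        # Hadamard, then measure: Pr(outcome 0) = cos^2(theta / 2).
        bits.append(0 if rng.random() < np.cos(theta / 2) ** 2 else 1)
    # bits[k] is the (n-k)-th binary digit of the estimate.
    return sum(b / 2 ** (n - k) for k, b in enumerate(bits))

print(semiclassical_qpe(1/3, 8))  # most runs land close to 0.333...
```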
This experiment is important progress, but until the factored numbers get a lot larger, the “size of number factored” yardstick is going to be a frustrating one. For example, how is it that factoring 143 on an NMR quantum computer takes only 4 qubits while factoring 15 takes 7?
Anyone have ideas for something more objective, like a quantum analogue of MIPS? (Incidentally, apparently MIPS is obsolete because of memory hierarchies.) Perhaps Grover’s algorithm would be good, as “compiling” is harder to justify, and noise is harder to survive.
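(To illustrate why noise is harder to survive in Grover’s algorithm, here is a toy NumPy sketch of Grover search with a fraction of white noise mixed in per iteration. The noise model and function name are my own crude stand-ins, not a proposed benchmark.)

```python
import numpy as np

def grover_distribution(n, marked, noise_per_step=0.0):
    """Outcome distribution of Grover search over 2**n items with one
    marked item.  A fraction noise_per_step of white noise is mixed in
    per iteration: a crude stand-in for noisy entangling gates."""
    N = 2 ** n
    psi = np.full(N, 1 / np.sqrt(N))          # uniform superposition
    steps = int(np.pi / 4 * np.sqrt(N))       # near-optimal iteration count
    for _ in range(steps):
        psi[marked] *= -1                     # oracle flips marked amplitude
        psi = 2 * psi.mean() - psi            # inversion about the mean
    survive = (1 - noise_per_step) ** steps   # prob. of no noise event
    return survive * np.abs(psi) ** 2 + (1 - survive) / N

print(grover_distribution(4, marked=3)[3])                      # ~0.96 ideal
print(grover_distribution(4, marked=3, noise_per_step=0.2)[3])  # ~0.52
```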
Update:
In the comments, Tom Lawson points out the real significance of factoring 21. This is the smallest example of factoring in which the desired distribution is not uniformly random, thus making it possible to distinguish the successful execution of an experiment from the effects of noise.
Aram, thank you for your post about our recent paper.
I am broadly in agreement with what you have written. However, I believe you have missed an important point. I agree that the “size of factored number” is not a useful yardstick for small scale demonstrations of factoring. But factoring 21 (with the co-prime 4) is interesting not because twenty-one is greater than fifteen, but because it is the first non-trivial example of factoring. One may define a trivial protocol as one that produces results which are indistinguishable from random noise.
We consider a system composed of n control qubits which factors the integer N, using the co-prime x. The algorithm searches for the order, r, linking x and N according to x^r = 1 (mod N). In practice such a machine would be run just a handful of times until a useful output is found. However, for a meaningful demonstration the experiment must be run many times so that the entire output distribution is seen. This distribution, which contains the essential properties of the protocol, takes the form of a series of peaks spaced at intervals of 1/r. Since the control qubits encode a binary representation of the ideal distribution, the peaks are usually slightly smeared. But they can be made more precise – brought into focus, if you like – by increasing the number of control qubits, n. (In a practical implementation, where few iterations of the algorithm are needed, precision is the wrong term. Rather we can talk about the efficiency with which a useful result is found.)
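To make the peak structure concrete, here is a short NumPy sketch of the textbook phase-estimation output distribution, averaged over the r eigenphases s/r (the function name is illustrative, and this is an idealization, not the paper's data):

```python
import numpy as np

def order_finding_distribution(r, n):
    """Ideal outcome distribution of the n-qubit control register when
    the order is r, averaged over the r eigenphases s/r.  Peaks sit
    near y = s * 2**n / r and sharpen as n grows."""
    M = 2 ** n
    y = np.arange(M)
    probs = np.zeros(M)
    for s in range(r):
        delta = s / r - y / M                 # phase mismatch of outcome y
        num = np.sin(np.pi * M * delta)
        den = M * np.sin(np.pi * delta)
        amp = np.ones(M)                      # exact peaks have amplitude 1
        off = np.abs(den) > 1e-12
        amp[off] = np.abs(num[off] / den[off])
        probs += amp ** 2 / r
    return probs                              # sums to 1

p = order_finding_distribution(3, n=5)
print(p.argsort()[-3:])  # most likely outcomes: near 0, 32/3 and 64/3
```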
A quirk of factoring small numbers means that a power-of-two order is often found,
r = 2^p,
for integer p. This is an example of a trivial computation. In this case the distribution can be perfectly represented in binary and the correct output is a uniform distribution of peaks (with no smearing) corresponding to all 2^p logical states of p control qubits. (The remaining n-p qubits encode trailing zeros in the distribution. In practice they remain unentangled throughout the computation.) This uniform distribution is identical to that of a malfunctioning circuit where arbitrary levels of white noise are mixed into each entangling gate. Hence, the output of a trivial protocol does not reflect the quality of the entanglement produced in the circuit.
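Reusing the order_finding_distribution sketch above (my own illustration), the contrast between a trivial and a non-trivial order is immediate:

```python
p_trivial = order_finding_distribution(4, n=5)   # r = 4 = 2^2 (e.g. N = 15)
print(np.flatnonzero(p_trivial > 1e-9))          # [ 0  8 16 24]: unsmeared
print(p_trivial[p_trivial > 1e-9].round(3))      # [0.25 0.25 0.25 0.25]: flat

p_21 = order_finding_distribution(3, n=5)        # r = 3 (N = 21, x = 4)
print(p_21.round(2))                             # smeared, non-uniform peaks
```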
Factoring fifteen with any choice of co-prime is an instance of a trivial algorithm. It is not until we consider twenty-one (N=21 with co-prime x=4 gives order r=3) that we find an algorithm with an entanglement-sensitive distribution. (Indeed, we go to some lengths to experimentally demonstrate that white noise destroys the characteristic distribution.) I believe factoring N=21 with x=4 is the simplest non-trivial example of Shor’s factoring algorithm.
You point out that increasing N is not necessarily interesting. I agree. Better to increase the precision (read: number of control qubits) in one particular non-trivial calculation. For instance, improving the algorithm for N=21 may be more challenging than factoring N=143. (Since the order r=3 cannot be perfectly represented in binary, there is no limit to the number of control qubits whose behavior is entanglement-sensitive in factoring N=21 with x=4, whereas a trivial algorithm exists for factoring N=143: it suffices to choose the co-prime x=12, giving order r=2.)
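Both orders quoted here are quick to verify classically; a minimal sketch (helper name illustrative):

```python
from math import gcd

def multiplicative_order(x, N):
    """Smallest r >= 1 with x**r = 1 (mod N); assumes gcd(x, N) == 1."""
    assert gcd(x, N) == 1
    r, acc = 1, x % N
    while acc != 1:
        acc = acc * x % N
        r += 1
    return r

print(multiplicative_order(4, 21))    # 3: not a power of two, non-trivial
print(multiplicative_order(12, 143))  # 2: a power of two, trivial
```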
I believe such experiments would exhibit the essential features of quantum computation. Furthermore, they have the advantage of being fallible; this is an experiment which can fail, and thus one whose results reflect a rigorous test of the experimental set-up.
For the time being – while demonstrations remain small – I propose that your yardstick be based on this idea: the maximum number of control qubits, n, used to (successfully) factor N=21 with co-prime x=4.