Depth or Breadth?

Which would you rather have, breadth or depth? Suppose I give you the choice between the following two directions in experimental research in quantum computing in the next five years: either a few (say ten to thirty) qubits with extremely low decoherence and high control precision (perhaps even high enough to perform quantum error correction), or a huge number of qubits (say hundreds to thousands to millions) with fairly poor decoherence and control. Assume that in both cases, fabrication can be done with a fairly high degree of sophistication. Of these two options, which would you prefer to see in the near future?

17 Replies to “Depth or Breadth?”

  1. I know this is a cop-out, but it depends. Can we still do truly quantum computations on the large noisy computer? If so, then it’s a no-brainer. By truly quantum, I mean able to harness a quantum speed-up. Can you imagine the implications for condensed matter physics alone if we could do simulations of a 250×250 lattice of qubits?

  2. It seems a bit pompous to me to plan out the future of experimental work in this area. Presumably good work requires new ideas. People should attempt whatever good new thing is the most likely to actually work.
    Either of your options would be massive progress in the field, as I understand it.

  3. I call shenanigans. Why? Because you haven’t posed the question precisely enough. Within your conditions, we have both already:
    1) NMR is up to 12 qubits, IIRC. Control and decoherence are good enough to do error correction. It’s a perfectly satisfactory technology — except that it’s not scalable (there’s nothing wrong with pseudopure states unless you want scalability).
    2) Any old chunk of matter satisfies your second criterion — I’ve got 10^26 qubits with extremely poor decoherence and control properties.
    And yes, of COURSE I’m being disingenuous; I’ve reductioed ad absurdum. Before I can vote, I want to know two things:
    1) In case (a), is the technology scalable in principle? Or an obvious dead end?
    2) In case (b), how bad is the infidelity?
    The Dude has the right idea — quantum computers is as quantum computers does. Can we do something useful with it? And, if not, how much work will it take to do something useful with it?
    -The Crotchety Positivist

  4. The question is purposely vague because, well, mystery is the way to look really sexy, right? 😉
    But seriously (yeah, right, like I can be serious), I don’t think the pseudopure states of the 12-qubit NMR experiments satisfy the requirement of being below the threshold for error correction, right?
    But to clarify, let’s say the error rate for the large-number-of-qubits case is 0.13*10^(-2), and the smaller system is scalable in a physicist’s definition of scalable, i.e. no real reason we can’t scale it, but not scalable from the perspective of someone who has spent their life working with classical silicon technologies. (A rough sketch of what that error rate buys you under concatenation follows this comment.)
    I love the taste of shenanigans in the morning.
    Oh, and this post may be related to the last post 😉 Oh, and I prefer tall and skinny unless I’m riding on the Seattle buses, cus tall people don’t fit in their seats.
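
A rough sense of what an error rate of 0.13*10^(-2) buys you, under the standard concatenated-code picture in which the logical error rate after k levels of concatenation scales roughly as p_th*(p/p_th)^(2^k). The sketch below is only a back-of-the-envelope estimate, and the threshold value of 10^(-2) is an assumed number for illustration, not a property of any particular code or noise model.

    # Back-of-the-envelope logical error rate under code concatenation.
    # Uses the textbook scaling p_L ~ p_th * (p / p_th)^(2^k); the threshold
    # value below is an assumption for illustration, not a measured number.
    P_TH = 1e-2        # assumed fault-tolerance threshold
    P_PHYS = 0.13e-2   # physical error rate quoted in the comment above

    def logical_error_rate(p_phys: float, levels: int, p_th: float = P_TH) -> float:
        """Logical error rate after `levels` levels of concatenation."""
        return p_th * (p_phys / p_th) ** (2 ** levels)

    for k in range(1, 5):
        print(f"levels={k}: p_L ~ {logical_error_rate(P_PHYS, k):.2e}")

If the physical rate were above the assumed threshold, the same formula would show the logical rate getting worse with each level of concatenation, which is the sense in which the threshold matters for the question in the post.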

  5. An interesting combination would be poor coherence times but good control. You might be able to do some graph-state stuff then, since two-qubit gates usually take a lot longer than single-qubit gates, and in the graph-state approach you can get them all out of the way in one timestep up front. (A toy one-layer cluster-state preparation follows this comment.)
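
A minimal sketch of the graph-state point above, assuming the simplest case of a linear cluster state: every qubit starts in |+>, and the entangling step is a CZ on every edge of the graph. All the CZ gates are diagonal and commute, which is why they could in principle be scheduled in a single layer on hardware; the statevector simulation below just applies them one after another.

    # Prepare a 4-qubit linear cluster (graph) state: |+>^n followed by CZ
    # on each edge of a line graph.  The CZ gates commute, so on hardware
    # the whole entangling layer could in principle take one timestep.
    import numpy as np

    def plus_states(n: int) -> np.ndarray:
        """The n-qubit |+...+> state as a length-2^n statevector."""
        return np.full(2 ** n, 1 / np.sqrt(2 ** n))

    def apply_cz(state: np.ndarray, q1: int, q2: int, n: int) -> np.ndarray:
        """Controlled-Z between qubits q1 and q2 (qubit 0 is the leftmost bit)."""
        out = state.copy()
        for idx in range(len(state)):
            bits = format(idx, f"0{n}b")
            if bits[q1] == "1" and bits[q2] == "1":
                out[idx] = -out[idx]
        return out

    n = 4
    state = plus_states(n)
    for q in range(n - 1):                       # edges of the line graph
        state = apply_cz(state, q, q + 1, n)
    print(np.round(state * np.sqrt(2 ** n), 3))  # +/-1 sign pattern of the cluster state

The printed vector is the four-qubit linear cluster state up to normalization; the pattern of minus signs is exactly what the single CZ layer adds to the uniform superposition.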

  6. I think it’s an excellent question, because it probes an important underlying issue. Are QCs with less-than-perfect precision worthwhile computing machines? Of course, their study will teach us some interesting decoherence physics, but are they worthwhile COMPUTING machines? I think they would be worthwhile computers if there existed interesting algorithms that did not require perfect precision.
    I think Shor’s algorithm requires perfect precision, because it searches for a unique set of primes within an immense sea of possibilities. Other algorithms are allowed more slack in the final answer; if the answer is approximate, it is still useful. An approximate factorization is useless; indeed, there is no such thing.
    I suspect D-Wave Systems would say that simulation of molecular systems does not require perfect precision, but I have my doubts about that. A not-very-precise molecular simulation obtained with a QC might be no better than a molecular simulation obtained using approximations and a classical computer.
    Does the threshold for error correction depend on how much precision is required in the final answer?

  7. Shor’s algorithm does not require perfect precision; if it did, the whole field would be toast. Error correction gets you around this problem.
    We do not know what the fundamental threshold for fault tolerant quantum computing is. Once we know that, we can answer Dave’s question. But right now I would place my bet on many qubits with relatively poor control. Nature tends to be generous.

  8. Not true: Shor’s algorithm gives the right answer only with high probability. The point is that it is easy to check the answer and run the computation again in case it failed (a toy check-and-retry loop follows this comment).
    Also, I don’t know what you mean by “approximate factorisation”. I don’t think there is such a thing in number theory.
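
A toy illustration of the check-and-retry point, with the quantum part replaced by a stub. The stub below just guesses at random so that the example runs; it is not meant to stand in for an actual implementation of Shor's algorithm.

    import random

    def run_shor_once(N: int) -> int:
        """Stand-in for one (possibly failing) run of the quantum subroutine."""
        return random.randrange(2, N)          # placeholder: a random guess

    def factor_with_retries(N: int, max_tries: int = 10_000):
        """Rerun until a candidate passes the cheap classical check, if ever."""
        for _ in range(max_tries):
            candidate = run_shor_once(N)
            if N % candidate == 0:             # verification is one division
                return candidate
        return None

    print(factor_with_retries(3 * 5))          # prints 3 or 5 with near certainty

The only point is that verification is classically cheap, so a per-run success probability well below one is perfectly acceptable.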

  9. rrtucci, I think you should define clearly what you mean.
    If you believe what I wrote is factually incorrect, you are welcome to point out any errors. But there is no need to be rude.

  10. If it’s not clear, by “perfect precision” I meant “perfect precision in the final answer”. Shor’s algorithm requires that. It doesn’t find an approximate factorization, it finds an exact one.

  11. There are a number of reasons why “perfect precision” is not needed:
    1) Measurements are NOT state tomography. Your measurement collapses the state onto one of 2^n possible states (where n is the number of qubits). These are discrete, so there is no continuum of possible states, and so finite errors can be tolerated.
    2) The measurement will collapse all errors onto bitflips. If the probability of an error is small, then so is the probability of the corresponding bitflip.
    3) It’s also worth pointing out that if you were to have up to m bit-flip errors in the final answer, this would lead to O(log(N)^m) possible candidate factors. If m is small, say 2 or 3, then this is checkable on a classical computer, since you could simply do trial division by this subset of numbers (a toy version of this check follows the list).
    4) If you make a complete mess of doing the calculation, i.e. you spill coffee on the qubits ;-), you can always try again.
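
A toy version of point 3, assuming the measured output differs from a true factor by at most m flipped bits. The specific N, the "noisy" output, and the value of m are made-up illustration values.

    # Enumerate every integer within Hamming distance m of the measured output
    # and keep the ones that pass trial division.  With n_bits ~ log2(N) there
    # are sum_{k<=m} C(n_bits, k) = O(log(N)^m) candidates to try.
    from itertools import combinations

    def candidates_within(measured: int, n_bits: int, m: int):
        """All integers within Hamming distance m of `measured` on n_bits bits."""
        for k in range(m + 1):
            for positions in combinations(range(n_bits), k):
                cand = measured
                for pos in positions:
                    cand ^= 1 << pos
                yield cand

    N = 15 * 17                        # number to factor (toy example)
    noisy_output = 17 ^ 0b101          # a "measured" factor with two flipped bits
    m = 2                              # assumed bound on bit-flip errors

    found = sorted(c for c in candidates_within(noisy_output, N.bit_length(), m)
                   if 1 < c < N and N % c == 0)
    print(found)                       # trial division recovers 17, among other divisors

The count of candidates, not the size of N, is what stays small here, which is why a few residual bit flips in the readout are not fatal.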

  12. Speaking as engineers, we would put a high priority on large-scale quantum emulation capability. From an engineering point of view, a vital role of experiments is to validate system-level emulations.
    Almost all system-level technologies these days are developed by a central strategy of large-scale emulation:
    http://www.rttc.army.mil/whatwedo/primary_ser/modeling.htm

    Modeling, simulation, and hardware/human-in-the-loop technology, when appropriately integrated and sequenced with testing, provide each customer the best tests, at a reasonable cost, within their time schedule. This “advanced testing” process specifically 1) reduces the cost of life-cycle testing, 2) provides significantly more engineering/performance insights into each system evaluated, and 3) reduces test time and lowers program risk.

