Measurement-Based Conference

The fact that you can perform unitary quantum evolutions using simple (adaptive) measurements is, to a physicist, an unexpected result. Indeed, it could be that there is no unitary evolution in the universe, only measurements! If you’re interested in measurement-based quantum computing, check out the conference advertised below:

International Workshop on Measurement-based Quantum Computation (MBQC07)
St. John’s College, Oxford
18 – 21 March 2007
http://www.qunat.org/workshop/
Measurement-based quantum computing (MBQC) is an active and rapidly growing area of research. The formalism of graph states (or cluster states) has proven to be a powerful way of describing the essential entanglement resources needed to perform quantum information processing tasks. Initially conceived for systems such as optical lattices and linear optical computing, this theory is now shaping the latest experimental proposals across the full spectrum of QIP technologies. A key theme of this workshop will be to foster dialog between theoreticians involved in MBQC and the experimentalists who are positioned to embrace and implement the new ideas.
Registration is open until November 30th and the number of participants will be limited to 50.

12 Replies to “Measurement-Based Conference”

  1. I don’t know much about this approach. It seems that mostly Europeans work on it. Are there any American experimental groups working on it? What are the pros and cons of this approach compared to competing approaches?

  2. The fact that you can perform unitary quantum evolutions using simple (adaptive) measurements is, to a physicist, an unexpected result.
    The really surprising thing to me is the concept of teleportation; the fact that adaptive measurements can implement any unitary evolution must have been understood back in 1993, when the teleportation paper appeared. Note also that Schrödinger almost figured this out when he was writing about “quantum steering”.
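    For concreteness, here is a minimal toy sketch of the basic primitive (my own illustration, not taken from any of the papers mentioned): entangle the input qubit with a |+> ancilla using a CZ, measure the input qubit in a basis rotated by an angle theta, and apply an outcome-dependent X correction. Up to that adaptive correction, the ancilla is left in H Rz(-theta)|psi>, and chaining such steps with bases adapted to earlier outcomes builds up arbitrary single-qubit rotations. The function name and parameters below are mine.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]])
    I2 = np.eye(2)
    CZ = np.diag([1, 1, 1, -1])

    def Rz(theta):
        # z-rotation, up to an irrelevant global phase
        return np.diag([1.0, np.exp(1j * theta)])

    def one_bond_gate(psi, theta, rng):
        """One cluster-state bond: CZ with a |+> ancilla, measure the input qubit
        in the basis {(|0> +/- e^{i*theta}|1>)/sqrt(2)}, then correct with X."""
        plus = np.array([1.0, 1.0]) / np.sqrt(2)
        state = CZ @ np.kron(psi, plus)              # qubit 1 = input, qubit 2 = ancilla
        b0 = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
        b1 = np.array([1.0, -np.exp(1j * theta)]) / np.sqrt(2)
        amp0 = np.kron(b0.conj(), I2) @ state        # unnormalised ancilla state, outcome 0
        amp1 = np.kron(b1.conj(), I2) @ state        # unnormalised ancilla state, outcome 1
        outcome = 0 if rng.random() < np.vdot(amp0, amp0).real else 1
        out = amp0 if outcome == 0 else amp1
        out = out / np.linalg.norm(out)
        if outcome == 1:
            out = X @ out                            # the adaptive Pauli correction
        return outcome, out

    rng = np.random.default_rng(1)
    theta = 0.7
    psi = np.array([0.6, 0.8j])                      # an arbitrary normalised input qubit
    outcome, out = one_bond_gate(psi, theta, rng)
    target = H @ Rz(-theta) @ psi                    # the unitary this bond implements
    print(outcome, abs(np.vdot(target, out)))        # overlap prints as 1.0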

  3. Indeed, it could be that there is no unitary evolution in the universe, only measurements!
    Only if they’re adaptive measurements:-). Maybe we need God to fulfil that role.
    It seems that mostly Europeans work on it.
    Well, there are a few notable non-Europeans who have dabbled in 1WQC, like Nielsen and Leung.
    What are the pros and cons of this approach compared to competing approaches?
    You’d have to go through each physical implementation to really assess the pros and cons. There are many proposals, including optical lattices, NV centres in diamond, linear optics, cavity QED, and even superconducting qubits.
    Pros:
    It’s a neat idea.
    You can use probabilistic entangling processes and it’s still scalable (modulo assumptions).
    It lets you do computation in systems where you don’t have a directly controllable two-qubit interaction between individual qubits (e.g. optical lattices, NV centres in diamond).
    Cons:
    It’s still only equivalent to the circuit-based model (arguably a con).
    Probabilistic entanglement generation may incur a huge overhead in the prior resources required to generate a graph state, even if it’s created “on demand” (see the rough sketch at the end of this comment). Factor in error correction and fault tolerance and your computer is going to be huge, even compared to a conventional QC.
    Photon cluster-state computation requires fast feed-forward and classical processing, and there is the question of photon losses as well.
    There’s some work on error thresholds and fault-tolerant operation of 1WQC, but whether it’s any more practical than conventional QC is still to be established.
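    To put a rough number on the prior-resource point, here is a back-of-the-envelope sketch (my own toy model with made-up parameters, not any specific proposal): building an n-bond linear cluster with entangling operations that each succeed with probability p. If a failed attempt only costs that attempt, the expected cost is n/p; if a failure destroys the chain built so far, the expected cost grows exponentially with n. The “on demand” and divide-and-conquer strategies in the literature exist precisely to stay close to the first regime, but the contrast shows why the overhead question matters.

    # Toy model of probabilistic bond generation for a linear cluster state.
    # p is a made-up per-attempt success probability.

    def attempts_local_failure(n_bonds, p):
        # Each bond is retried independently until it succeeds: n/p attempts on average.
        return n_bonds / p

    def attempts_chain_destroying_failure(n_bonds, p):
        # Expected number of attempts to get n consecutive successes when any
        # failure forces a restart: (1 - p^n) / (p^n * (1 - p)), exponential in n.
        return (1 - p**n_bonds) / (p**n_bonds * (1 - p))

    p = 0.5
    for n in (5, 10, 20):
        print(n, attempts_local_failure(n, p),
              round(attempts_chain_destroying_failure(n, p)))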

  4. I’d add another con, which is that measurements are, in almost all implementations, slower than one- and two-qubit gates (yah got’s to amplify your result). I’d also add another pro, which is that it leads to some cool complexity results: quant-ph/0205133.

  5. Excellent answer, Daniel (last name?). Excellent points, Dave. (Wow, I respect a paper that has 6 revisions, each one closer to perfection.)

  6. I’d add another con in that if the initial state isn’t a stabilizer state then things get a bit wacky.
    The con that Dave mentioned is a very large one, though there is an equivalent in the standard model (well, for a lot of implementations) in that single-qubit gates (at least about one axis) are likely to be slow.

  7. The con that Dave mentioned is a very large one, though there is an equivalent in the standard model (well, for a lot of implementations) in that single-qubit gates (at least about one axis) are likely to be slow.
    I thought the main problem for many implementations of the circuit-based model was slow measurement compared with gate time, especially if you want to do ancilla verification, error-syndrome extraction and immediate recovery operations. DiVincenzo and Aliferis have a nice paper (quant-ph/0607047) that gets around this with some clever tricks which keep the error threshold the same as with fast measurement.
    Unfortunately, I don’t see how to apply the same tricks to 1WQC.

  8. Daniel – that’s a problem too; it’s always a problem. Maybe my use of the term “equivalent” was a bit dumb. I meant “equivalent” in the sense that there is a bottleneck in most implementations of the standard model, just as there is in the measurement-based models. When you look at single-qubit operations in many architectures, there is one axis of the Bloch sphere about which a single-qubit rotation is relatively slow.

  9. I’d say the Clifford-gate pro Joe mentioned is a pretty huge one, when one considers that error correction is what is done most of the time, and error correction consists of only Clifford gates (when using stabilizer codes anyway, even if it is with subsystem encoding).
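    A toy way to see this (my own illustration): for the three-qubit bit-flip code, error correction amounts to measuring the stabilizers Z1Z2 and Z2Z3 and applying a Pauli recovery, and for a bit-flip error pattern the syndrome is just a pair of parity checks; everything stays inside the stabilizer/Clifford world.

    import numpy as np

    # Parity-check view of the 3-qubit bit-flip code: rows are the stabilizers
    # Z1Z2 and Z2Z3; the syndrome of an X-error pattern is its parity under each.
    checks = np.array([[1, 1, 0],
                       [0, 1, 1]])

    def syndrome(x_errors):
        # x_errors[i] = 1 if qubit i suffered a bit flip
        return checks @ x_errors % 2

    for err in ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]):
        print(err, syndrome(np.array(err)))   # each single flip gives a distinct syndrome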

  10. Hmmm a lot of really interesting points have been raised here.
    With regard to gate time vs. measurement time, it’s not clear to me that the measurement time is necessarily longer in all implementations.
    With some solid-state candidates (e.g. electron spins in electrically gated quantum dots) one could infer from the current state-of-the-art experiments that measurement (e.g. with a single-electron transistor or point contact) will take much longer than unitary operations (implemented e.g. via the exchange interaction). But it’s not always true that measurement takes a long time. For instance, the photon counters developed for quantum crypto can easily do measurements in less than a nanosecond. In current ion-trap experiments, the readout time (using state-dependent resonance fluorescence) is roughly the same as the two-qubit gate time, and there’s every reason to believe that this can be improved (e.g. by upping the photon collection efficiency). The ultimate speed limit for this type of optical measurement will be the spontaneous emission time, which varies from system to system, but is maybe ~10^(-8) s (quantum dots, impurities in diamond) to ~10^(-6) s (trapped ions). Even shorter times are possible by putting the emitter in an optical cavity (via the Purcell effect).
    Another point to note is that in 1WQC many of the measurements can be done in parallel, so it might be possible to get away with relatively slow measurements. Only the measurements that implement the non-Clifford part of the circuit need to be done sequentially (and then only at the “logical” level – at the physical level each of these may be implemented in parallel, depending on the fault-tolerance scheme used). A rough timing sketch follows at the end of this comment.
    Finally, as I see it, one of the main reasons for proposing schemes like Phys. Rev. A 71, 060310 (which make heavy use of measurements to implement the 1WQC) is that they overcome some big headaches in more conventional implementations based on the gate model (such as spins in dots). In particular, it’s kind of hard to address individual qubits when you place them so close together. By going to a more distributed architecture, you might overcome this problem, because the qubits could be meters away from each other, or even in different labs. This means it might be easier to get much higher-fidelity operations at the physical level, even if the price to pay is an increased overhead due to the non-deterministic nature of the two-qubit operations. This feeds into the point that Daniel makes above – potentially, by working harder to engineer high-fidelity operations at the physical-qubit level, the overall overhead (including error correction) could be much lower. This is because the overheads associated with fault tolerance drop dramatically when the errors at the physical level are significantly below the threshold.
    It’s very hard to make concrete statements about this, however, because we still don’t know (i) what the final hardware for a QC will actually be, and (ii) to what extent fault-tolerant schemes can be improved.
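    To illustrate the parallelism point with some entirely made-up numbers (both the times and the counts below are hypothetical placeholders, not estimates for any real device): if only the adaptive rounds have to wait for earlier outcomes and feed-forward, the runtime scales with the number of adaptive rounds rather than with the total number of measurements.

    # Toy comparison of measurement-time cost in 1WQC: fully sequential vs.
    # parallel-with-adaptive-rounds. All numbers are illustrative placeholders.

    t_meas = 1e-6              # hypothetical single measurement time (s)
    t_ff = 1e-7                # hypothetical classical feed-forward latency (s)

    total_measurements = 10**6 # hypothetical number of physical measurements
    adaptive_rounds = 10**3    # hypothetical rounds whose bases depend on earlier outcomes

    fully_sequential = total_measurements * t_meas
    parallel_adaptive = adaptive_rounds * (t_meas + t_ff)

    print(f"fully sequential:     {fully_sequential:.3e} s")
    print(f"only adaptive rounds: {parallel_adaptive:.3e} s")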
