Overhyped press releases are standard in quantum computing research and a staple of what makes me sound like a grumpy old man. Really, I’m not that grumpy (really! really!), but I always forget to post the stuff that isn’t overhyped. For example, today I stumbled upon an article about a recent experimental implementation, done in China, of a code for overcoming qubit loss. In this article I find a graduate student who was able to get a reasonable quote into the article:

While optimistic critics are acclaiming the newly achieved progress, the team, however, is cautiously calm. “There are still a lot to do before we can build a practically workable quantum computer. Qubit loss is not the only problem for QC; other types of decoherence are to be overcome,” remarks LU Chaoyang, a PhD student with the team. “But good news is, the loss-tolerant quantum codes demonstrated in our work can be further concatenated with other quantum error correction codes or decoherence-free space to tackle multiple decoherence, and may become a useful part for future implementations of quantum algorithms.”

Ah, that makes me happy.

Have I told the story about the best evidence for quantum computers being how good Gauss was at understanding prime numbers?

Overcoming decoherence? Easy! Just build your computer out of neuron microtubules!

Remember, a quantum computer is not a truck. It’s a series of tubes!

(Thank you, thank you, I’ll be here all week. If you like my act, be sure to buy my spoken-word album

They Laughed At Me At The Institute on your way out of the lobby.)

Yes, that is refreshing, unlike this.

“Quantum computing refers to the use of atoms to compute. It requires harnessing the highly unusual and eccentric properties of atoms to create infinite computing capacity. Trying to picture the impact quantum computing might have on our daily life is a little like wondering, in 1945, how computers would change the world. Imagine the power of a computer with 10 million times the capacity of current computers. The age of the universe could be calculated in seconds, top-secret codes cracked immediately and web downloads could happen almost instantly. Think of any process or calculation your computer does now, then imagine it occurring 10 million times faster. And that’s just the beginning.”

It will make us into Gods!

I can’t wait to bring the power of Quantum Computing down onto my ex’s head!

BWA-HA-HA-HA-HA!

When will Wal-Mart have them?

I have another suggestion as to how to avoid qubit loss. Use superconducting qubits.

Isn’t there a morally equivalent effect for some superconducting qubit implementations in which you excite out of the two levels you use for your qubit?

For what it’s worth, it seems the referenced work is described in arxiv:0804.2268.

I googled Ashley Stephens’ source for the quote above. Here it is: http://www.austmus.gov.au/eureka/go/eureka-prize/leadership-in-science

Dave: No, I don’t think so. While you can bias a qubit enough to involve higher levels, you can’t get out of the two levels that define the readout basis for any qubit. In fact, adding these extra levels can vastly speed up escape from metastable minima, and the levels are computationally interesting in their own right.

If you can detect excitation out of the two levels of your qubit you can treat this error as an erasure.
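As a toy illustration of that point (not the scheme used in the experiment under discussion), detected leakage can be modeled as an erasure channel: the qubit is embedded in a three-level system whose third level acts as a "loss" flag, so the decoder learns the error location for free. All the numbers here are hypothetical:

```python
import numpy as np

# Toy model: a qubit embedded in a 3-level system, where level |2> is a
# detectable "loss" flag.  With probability p the qubit state is replaced
# by |2><2|, and the decoder learns the loss location from the flag.
def erasure_channel(rho_qubit, p):
    rho3 = np.zeros((3, 3), dtype=complex)
    rho3[:2, :2] = (1 - p) * rho_qubit   # qubit survives with probability 1-p
    rho3[2, 2] = p                       # flagged loss with probability p
    return rho3

rho_plus = 0.5 * np.ones((2, 2))         # the state |+><+|
out = erasure_channel(rho_plus, 0.25)
print(np.trace(out).real)                # 1.0 (the map is trace preserving)
print(out[2, 2].real)                    # 0.25 (probability of a flagged loss)
```

The point of the located flag is exactly why erasures are benign: a correction can be targeted at the known position, rather than inferred from syndrome statistics.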

This concept of “excitation out of the computation basis” is new to me, and it is very interesting from a practical point of view, because (under another name) it is the central problem in quantum simulation.

Generally speaking, quantum simulations work on a low-dimensional state-space manifold that is immersed in a larger-dimensional “true” Hilbert space. And this low-dimensional manifold will typically be either a linear space, or else an algebraic Kähler manifold (which is curved, but tractably so).

In either case, at each step of a quantum simulation there will be a (hopefully small) amplitude for the simulated trajectory to project off the state-space (which is what I take to be the mathematical meaning of qubit loss). Here “hopefully small” means, from a practical point of view, that if this amplitude is unacceptably large, then we either have to enlarge our state-space, or else choose an alternative unravelling that does a better job of compressing onto our state-space … the latter being by far the preferred option.

Or else, we can simply accept the inaccuracy resulting from qubit loss … which is the simulation-theory equivalent of the LaTeX command “\smash”. 🙂

That’s how a simulation specialist might think about qubit loss, anyway.
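A concrete (and entirely hypothetical) numerical sketch of that “hopefully small” amplitude: it is just the norm of the component of the state orthogonal to the simulation subspace. All dimensions and coefficients below are made up for illustration:

```python
import numpy as np

# Hypothetical setup: a "true" Hilbert space of dimension D, and a simulation
# subspace spanned by the k orthonormal columns of B.
rng = np.random.default_rng(0)
D, k = 8, 3
B, _ = np.linalg.qr(rng.standard_normal((D, k)))  # random orthonormal frame

# A normalized state that lives mostly, but not entirely, in the subspace.
psi = B @ np.array([0.8, 0.5, 0.2]) + 0.05 * rng.standard_normal(D)
psi /= np.linalg.norm(psi)

# The amplitude to "project off the state-space" is the norm of the component
# of psi orthogonal to the subspace; its square is the leakage probability.
in_subspace = B @ (B.T @ psi)
leakage = np.linalg.norm(psi - in_subspace)
print(leakage)  # small => the compressed description is faithful
```

If this number grows along a simulated trajectory, that is the signal to enlarge the state-space or switch to a better-compressing unravelling, as described above.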

The informatic point of view is less familiar to me. My (vague) intuition is that perhaps “qubit loss” could be simulated by including “hidden gates” and “hidden qubits” in the circuit … with the idea being that the computational processes should be robust even when the hidden gates are adversarially chosen.

Pointers to further articles on “qubit loss” would be very welcome! It is great fun that there are so many different ways of approaching this “elephant” that we call quantum information theory!

Geordie/Dave: It depends. I haven’t looked to see what these guys are trying to protect against, but I’m pretty sure that exciting out of the qubit space is an issue in some superconducting qubits. See, for example, figures 6 and 7 in arxiv:0806.0383.

Is this really what the paper in question is protecting against? It seems a little odd to call excitation out of the computation basis “qubit loss”, since the qubits can come back if they decay back down.

Thanks Sean, for both the discussion *and* the references. And thanks to Dave too, for a series of posts that are (painfully) enlarging my thinking along the following lines.

Let’s imagine that quantum computation (of BQP problems) and quantum simulation (with polynomial resources) are the opposite sides of a hard-to-open door.

Then for every mathematical and physical principle on the quantum computation side of the door, we can seek a dual mathematical and physical principle on the quantum simulation side of the door.

The QIT community has demonstrated the power of ancilla bits and error-correcting feedback on the computation side of the door. So we can ask: what are the dual mechanisms on the quantum simulation side of the door?

Dave Bacon’s posts of the past few weeks have helped me appreciate that the answer probably lies in entangled unravelling. Perhaps we cannot find, in our state-space, basis vectors that allow a compressive unravelling of the Hamiltonian dynamics. But by introducing (fictitious) qubits and performing entangling measurements upon them, we create new opportunities for achieving the desired compressive unravelling.

Indeed, if I am not mistaken (uhhhh … it’s distinctly possible that I *am* mistaken 🙂 ), perhaps the DDMRG folks may already be doing this, without (as yet) describing their methods in a form that the QIT community can readily assimilate?

Anyway, my thanks to all, and especially to Dave, for a wonderful and provocative series of posts! 🙂

Sean: No, this isn’t correct. If you transition into an excited state, these states decay quickly into the lowest state in their well, which is non-adiabatic, but this doesn’t matter. Both the excited state and the lowest state in the same well correspond to the same state in the readout basis. Transitioning into an excited state does not take you out of the two-state readout basis and is definitely not a leakage error. Note that I’m specifically talking here about rf-SQUID flux qubits, where these energy levels are the ones that form in the two-well potential relevant for low-energy dynamics.

Geordie: If you can bias a qubit to couple to higher-lying levels, you can also get non-adiabatic transitions into those levels (“leakage errors”) – and there will be other mechanisms that lead to these errors, too. Fortunately, it seems that loss/leakage errors should be easier to handle than logical errors.

John: This paper has an elegant argument as to why the capacity of a qubit erasure channel vanishes for a loss rate of more than 1/2:

http://arxiv.org/abs/quant-ph/9701015

(below equation 2)
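For reference, the result argued there can be summarized in one line: the quantum capacity of the qubit erasure channel with loss probability $p$ is

```latex
Q_{\mathrm{erasure}}(p) \;=\; \max\{\, 0,\; 1 - 2p \,\},
```

so the capacity hits zero exactly at $p = 1/2$, the point at which an eavesdropper holding the lost qubits would know as much as the receiver, and no-cloning forbids both parties recovering the state.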

One of my favourite papers of the last few years is this one by Varnava, Browne and Rudolph:

http://arxiv.org/abs/quant-ph/0507036

There they show that you can *compute* for loss rates approaching 1/2. The interesting thing about their scheme is that it seemingly saturates the fundamental limit placed by no-cloning. If their scheme were any better, one could violate no-cloning (by the same argument as given in the previous paper).

Hi Geordie,

Thanks for the clarification! The process you describe sounds like it would at the very least cause a phase error, rather than a leakage/erasure error. This is a bit of a shame, because (as the preceding discussion should indicate), the expectation is that leakage errors are rather easier to correct than bit- or phase-flip errors (in the sense that the corresponding fault tolerance thresholds are much higher for erasure errors).

“Here ‘hopefully small’ means, from a practical point of view, that if this amplitude is unacceptably large, then we either have to enlarge our state-space, or else choose an alternative unravelling that does a better job of compressing onto our state-space …”

QC Business Plan.

(1) decide to “enlarge our state-space”

(2) find someplace good between where we are and Von Neumann Algebra/Hilbert Space-land.

(3) Profit!