I’ve never live blogged before (well I’ve been alive while I’ve blogged, but that is different, I guess), but maybe it will make me pay more attention to the talks, so here goes nothing. Oh, and happy Hallmark(TM) Valentine’s Day! I’ll be updating these posts as the conference goes along.
John Martinis led a tutorial on measuring coherence in a single qubit (Rabi flopping, Ramsey interference, lifetimes, spin echo), measuring fidelities of one-qubit gates, and process tomography for one- and two-qubit systems. All illustrated with beautiful experimental data from his group at UCSB.
Most interesting was where his group stands in creating one- and two-qubit gates. They can perform quantum process tomography on these processes, but what Martinis wanted to know was how he can get “what he should be doing next” out of the process tomography results. Now of course a process tomography matrix has a lot of information in it. But how do you usefully extract information about what processes are occurring in your system? Since there are multiple possible error processes that could be occurring, can you do anything better than just modeling and comparing? Are there sneaky ways to take process tomography data and extract out what you should be doing to improve the process fidelity?
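(A quick aside on what “modeling and comparing” even looks like in practice: here’s a minimal numpy sketch, entirely my own toy and not anything from the UCSB group, that builds chi matrices for a couple of guessed error models and scores each one against a stand-in “measured” chi matrix by their overlap. The error strengths and the stand-in matrix are made up for illustration.)

```python
import numpy as np

# Single-qubit Pauli basis
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
paulis = [I, X, Y, Z]

def chi_from_kraus(kraus_ops):
    """Chi (process) matrix in the Pauli basis for a channel given by Kraus operators."""
    a = np.array([[np.trace(P.conj().T @ K) / 2 for P in paulis] for K in kraus_ops])
    return a.T @ a.conj()   # chi_mn = sum_k a_km a*_kn; trace one for a CPTP map

def overlap(chi1, chi2):
    """Process-fidelity-style overlap Tr[chi1 chi2] of two trace-one chi matrices."""
    return float(np.real(np.trace(chi1 @ chi2)))

# Two candidate error models; the strengths are invented for illustration
p, g = 0.02, 0.03
dephasing = [np.sqrt(1 - p) * I, np.sqrt(p) * Z]
amp_damp = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
            np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]
candidates = {"dephasing": chi_from_kraus(dephasing),
              "amplitude damping": chi_from_kraus(amp_damp)}

# Stand-in for the tomographed process (here: a slightly dephased identity)
chi_measured = chi_from_kraus([np.sqrt(0.985) * I, np.sqrt(0.015) * Z])

for name, chi in candidates.items():
    print(name, overlap(chi_measured, chi))
```

The sneaky thing Martinis is asking for would presumably invert this: go from the tomographed chi matrix directly to a diagnosis, instead of guessing models and checking them one at a time.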
Martinis also told us that digital logic is easier to explain than analog circuit logic because, well, physics is hard! He illustrated this by telling us that it was much easier to explain the former to his high-school-age son. This made me wonder whether quantum computing logic will be easier to explain than quantum theory proper. Certainly I like to claim this is true, but has anyone explained quantum circuits to their high-school-age daughter or son?
Some numbers for the UCSB qubits: a T1 of 400 ns and a T2 of 100 ns. Another interesting remark: the leakage into an extra state in their system was limited to less than 0.0001 (per gate, I think). This number (which is just for this leakage process, not for the true gate fidelities, which are more like 98 percent) is below commonly quoted values for the threshold. Martinis said something to the effect that for results like this, you just tell a graduate student: don’t stop until you get to 0.0001. I think all experimental profs should use this strategy. No Ph.D. for you until you get below the threshold! 🙂
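(If you’ve never seen where numbers like T1 and T2 come from, they’re fit parameters: you measure the decay of the excited-state population for T1 and the decay envelope of a Ramsey fringe for T2, then fit. A toy sketch below, with synthetic data generated around the quoted 400 ns and 100 ns, not the actual UCSB data.)

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic data standing in for real measurements; generated around
# T1 = 400 ns and T2 = 100 ns purely for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0, 1500, 60)                       # delay times in ns

# T1: excited-state population decays exponentially with the delay
p1 = np.exp(-t / 400) + 0.01 * rng.standard_normal(t.size)
def t1_decay(t, T1, a, b):
    return a * np.exp(-t / T1) + b
popt, _ = curve_fit(t1_decay, t, p1, p0=[300, 1.0, 0.0])
print("fitted T1 (ns):", popt[0])

# T2 (Ramsey): a decaying fringe; the envelope's time constant is T2
def ramsey(t, T2, f, a, b):
    return a * np.exp(-t / T2) * np.cos(2 * np.pi * f * t) + b
p2 = ramsey(t, 100, 0.01, 0.5, 0.5) + 0.01 * rng.standard_normal(t.size)
popt, _ = curve_fit(ramsey, t, p2, p0=[80, 0.01, 0.5, 0.5])
print("fitted T2 (ns):", popt[0])
```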
Andrew Landahl gave a tutorial on channel-adapted quantum error correction. Subtitled: A Quantum Love Story. A quantum love story between Alice and Bob, of course, set 100 years in the future. This talk involved perhaps the greatest concentration of puns that PowerPoint will allow. Convex became “Khan-vex,” with Khan being, well, you know. A “state representative” turned out to be not a quantum state but Bill Richardson. The talk was wonderful, but a bit hard to describe on a blog. Let’s just say there was a lot of work to get to a semidefinite program! Oh, and the talk ended with a marriage….in the church of the larger Hilbert space.
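(For those wondering why a semidefinite program shows up at all: the figure of merit, entanglement fidelity, is linear in the Choi matrix of the recovery map, and “the recovery is a valid quantum operation” is a positive-semidefiniteness condition plus linear equalities. Below is a stripped-down sketch of that structure, a single qubit sent through amplitude damping with no encoding at all, so it’s only a toy cousin of the channel-adapted problem in the talk. The damping strength is invented, and cvxpy is just my choice of front end for the solver.)

```python
import numpy as np
import cvxpy as cp

d = 2
ket = [np.eye(d, dtype=complex)[:, [k]] for k in range(d)]   # |0>, |1> as columns

# Noise: amplitude damping with an invented strength
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
def noise(rho):
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# Variable: Choi matrix of the recovery map R, ordered (input ⊗ output)
J = cp.Variable((d * d, d * d), hermitian=True)

# Entanglement fidelity of R∘N with the identity; linear in J
F = 0
for i in range(d):
    for j in range(d):
        M = noise(ket[i] @ ket[j].conj().T)          # N(|i><j|)
        A = np.kron(M.T, np.eye(d))
        F += sum((A @ J)[d * m + i, d * m + j] for m in range(d)) / d ** 2

# CPTP: Choi matrix positive semidefinite, partial trace over output = identity
constraints = [J >> 0]
for i in range(d):
    for j in range(d):
        constraints.append(sum(J[d * i + k, d * j + k] for k in range(d))
                           == (1.0 if i == j else 0.0))

prob = cp.Problem(cp.Maximize(cp.real(F)), constraints)
prob.solve()
print("optimal entanglement fidelity with recovery:", prob.value)
```

The real channel-adapted story adds an encoding into a larger code space, but the optimization has exactly this shape: linear objective, one big PSD constraint, linear trace-preservation constraints.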
Dave,
Your comment on John’s talk is such a good one that I simply can’t resist the urge to self-promote. 🙂 The question “How do you usefully extract information about what processes are occurring in your system… can you do anything better than just modeling and comparing?” is exactly what inspired arxiv/0705.4282.
It’s only one of many ways to pose and answer this question… but we built an algorithm to identify all the [unitarily] noiseless subsystems of a tomographed process. The idea is that, since process tomography contains a lot of information, we’d like to abstract out simpler statistics like “What virtual qubit is best preserved? and how well is it preserved? and what’s its unitary drift over time?”
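(For a flavor of the kind of statistic being described, though this is emphatically not the algorithm of arxiv/0705.4282: for a single qubit you can build the channel’s Pauli transfer matrix, take its 3×3 Bloch block, and read off from a singular value decomposition which direction is best preserved and what the residual rotation, the “unitary drift,” looks like. A toy numpy sketch with a made-up noise channel:)

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
paulis = [I, X, Y, Z]

def transfer_matrix(kraus_ops):
    """Pauli transfer matrix R_mn = Tr[P_m E(P_n)] / 2 for a one-qubit channel."""
    def apply(rho):
        return sum(K @ rho @ K.conj().T for K in kraus_ops)
    return np.array([[np.real(np.trace(Pm @ apply(Pn))) / 2
                      for Pn in paulis] for Pm in paulis])

# Made-up channel: a small over-rotation about Z followed by some dephasing
theta, p = 0.1, 0.2
Rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
kraus = [np.sqrt(1 - p) * Rz, np.sqrt(p) * Z @ Rz]

R = transfer_matrix(kraus)
bloch = R[1:, 1:]                     # 3x3 action on the Bloch vector
U, s, Vt = np.linalg.svd(bloch)
print("how well each principal axis is preserved:", s)
print("residual rotation (the 'unitary drift'):\n", U @ Vt)
```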
“Oh, and the talk ended with a marriage….in the church of the larger Hilbert space.”
Hmm, I think the CLHS only conducts polygamous marriages involving 3 people: Alice, Bob and Rachel the reference system. For a genuine two-person marriage they had better go to the CSHS: http://mattleifer.wordpress.com/2006/04/13/the-church-of-the-smaller-hilbert-space/