SQuInT Live Blogging – Friday Talks

Live blogging from day one of the talks at SQuInT 2008. Updated as the day goes along, so hit that refresh button 🙂

In a sign that history may be warping itself into a circle, the first speaker of the day was Serge Haroche, who was also the first speaker at the first SQuInT conference ten years ago. Closed timelike curves, anyone?
Haroche talked about quantum nondemolition measurements of photon number in a cavity (see arXiv:0707.3880). A quantum nondemolition measurement is a measurement of an observable which commutes with the free evolution of the quantum system (thus only external interactions will cause repeated measurements to yield different outcomes). Here Haroche is doing a quantum nondemolition measurement of photon number, not just a measurement of single photon versus no photon. Traditionally, nondemolition measurement of photons is difficult because most mechanisms for measuring photons destroy them. However, by coupling atoms strongly to the photon in the cavity and then measuring the state of the atoms, a quantum nondemolition measurement can be made which doesn’t destroy the photon. Roughly, one can think of this interaction as a controlled-not on the atom, conditional on the photon number modulo (say) eight (this choice could be changed). By clever use of repeated measurements of photon number modulo eight (which can be done because the photon is still in the cavity when new atoms come along), one can obtain information about the photon number. Very cool.
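If it helps to see the repeated-measurement idea in action, here is a toy Bayesian sketch of my own (not the group’s actual analysis): each probe atom is modeled as a Ramsey measurement whose detection probability depends on the photon number times an assumed per-photon phase shift of π/4, and repeated atoms narrow down the photon number modulo eight.
```python
import numpy as np

# Toy sketch of QND photon-number readout: each probe atom picks up a phase
# n * phi0 (phi0 = pi/4, so we learn n mod 8) and is detected in |e> with
# probability (1 + cos(n*phi0 + theta))/2 for a chosen Ramsey phase theta.
# Repeated atoms then Bayesian-update a distribution over n = 0..7.
# (Illustrative model only -- not the actual experimental parameters.)

rng = np.random.default_rng(1)
phi0 = np.pi / 4              # per-photon phase shift (assumed)
n_true = 5                    # "true" photon number in the cavity (toy value)
posterior = np.ones(8) / 8    # flat prior over n mod 8

for _ in range(50):           # send 50 probe atoms through the cavity
    theta = rng.uniform(0, 2 * np.pi)
    p_e = 0.5 * (1 + np.cos(n_true * phi0 + theta))
    saw_e = rng.random() < p_e
    like = 0.5 * (1 + np.cos(np.arange(8) * phi0 + theta))
    posterior *= like if saw_e else 1 - like
    posterior /= posterior.sum()

print(np.round(posterior, 3))  # sharply peaked at n_true after enough atoms
```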
An amazing number from Haroche’s talk: the cavities his group is using have such a high Q that a photon bounces between the cavity mirrors on average 1.3 billion times before decaying out of the cavity (see quant-ph/0612138).
Haroche also talked about generating Schrödinger cat states of photons (superpositions of coherent states with different phases) (paper soon, it looks like) and briefly mentioned two-cavity experiments which they are setting up in order to probe nonlocal quantum effects.
The next speaker was Thomas Gerrits, who talked about the generation of optical cat states (since so much of the group is here at SQuInT, Thomas showed a picture of their mascot, a stuffed cat named Erwin, who was left behind to guard the experiment). In contrast to the previous talk, the optical cat states discussed here are generated using squeezed states of light (made using down conversion). The experiment isn’t quite generating cat states yet (not enough squeezing, for example), but is instead generating something Thomas called a “rabbit state” (previous groups had generated “kitten states”, not quite high-fidelity cat states).
Steve Flammia’s talk was on the generation of continuous variable cluster states (see quant-ph/0703096). One caveat Steve mentioned about continuous variable cluster states is that there is no theory of fault tolerance for these systems. Which made me remember a question I’d had a while ago, and was too lazy to look up: what is the state of full fault tolerance for continuous variable quantum computing? Of course “continuous” leads to certain difficulties for definitions of fault tolerance, but it seems that maybe “finite-precision-continuous” models would be accessible to a fault-tolerant theory. Anyone (you know who you are) care to comment and dispel my ignorance? Does the paper of Gottesman, Kitaev, and Preskill fully address this? Remember, I’m live blogging so I can’t look this stuff up and reread the paper 🙂 (Well, that’s my excuse.)
Steve’s talk focused on the creation of continuous variable cluster states using a single multimode optical parametric oscillator. Steve showed methods to create a continuous variable cluster state on a (crazy?) toric lattice. The nice thing about this scheme is that it uses one cavity, in one step, with a constant number of probes.
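For anyone who hasn’t met CV cluster states before: the defining property is that the “nullifier” combinations p_a − Σ_b q_b (sum over the neighbors b of node a) approach zero variance as the squeezing grows. Here is a little two-mode numerical check of that fact, built from two squeezed modes and a CZ gate; it is a generic textbook toy, not Steve’s OPO/toric-lattice construction.
```python
import numpy as np

# Two-mode toy check of CV cluster state nullifiers.
# Quadrature ordering: (q1, q2, p1, p2), vacuum variance 1/2 (hbar = 1).

r = 2.0  # squeezing parameter (assumed value)
V0 = 0.5 * np.diag([np.exp(2 * r), np.exp(2 * r),     # anti-squeezed q's
                    np.exp(-2 * r), np.exp(-2 * r)])  # squeezed p's

# CZ gate: q's unchanged, p1 -> p1 + q2, p2 -> p2 + q1
S = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]], dtype=float)
V = S @ V0 @ S.T

# Nullifiers for the two-node cluster: n1 = p1 - q2, n2 = p2 - q1
for c in ([0, -1, 1, 0], [-1, 0, 0, 1]):
    c = np.array(c, dtype=float)
    print("nullifier variance:", c @ V @ c)  # -> exp(-2r)/2, vanishing with squeezing
```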
When Barry Sanders asked Steve about the limits of this scheme for real world systems where one needs to worry about being careful with the unbounded nature of “continuous variable” operators, Steve quoted Donald Rumsfeld, by referring to that as an “unknown unknown.” Classic.
The next talk was by Nathan Lundblad (whom I’ve had the pleasure of taking money from while playing poker, back when he was a poor graduate student and I was a rich (yeah right) postdoc), who talked about ultracold atoms in a radiofrequency-dressed optical lattice. The group Nathan works with creates a rubidium-87 Bose-Einstein condensate and then loads this into an optical lattice. Optical lattices are very cool. Roughly, you take laser beams, shine them at each other, and create a periodic potential (think of an egg crate for the two-dimensional case) where you can trap neutral atoms.
Now if I understood Nathan’s talk correctly (and there is a great chance that I didn’t!), normally when you create the potentials (by shining lasers at each other) you end up with potentials which are naturally periodic with a period of roughly half the wavelength of the light being used. But what if you want to engineer lattices with closer atoms, i.e. structure on a subwavelength scale? You might want to do this, for example, to engineer more interesting condensed-matter-like simulations in optical lattices. Nathan showed how he is doing this using radiofrequency dressing of a strongly state-dependent bare lattice (state-dependent lattices mean that atoms in different states see different potentials). Interestingly, he seems to be trying to engineer ring-like structures for the potential at each lattice site. Hmm, that sounds like a nice set of tools for those trying to engineer crazy Hamiltonians on lattices 🙂
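A quick numerical illustration of the half-wavelength point (generic numbers, nothing specific to Nathan’s experiment): two counter-propagating beams of wavelength λ make a standing-wave potential V(x) = V0 sin²(2πx/λ), whose wells repeat every λ/2.
```python
import numpy as np

# Standing-wave lattice from two counter-propagating beams (illustrative
# numbers only): V(x) = V0 * sin^2(2*pi*x / lam), period lam/2.

lam = 800e-9                      # laser wavelength in meters (assumed)
V0 = 1.0                          # lattice depth, arbitrary units
x = np.linspace(0, 3 * lam, 3001)
V = V0 * np.sin(2 * np.pi * x / lam) ** 2

# Spacing between adjacent potential minima (the "egg crate" wells):
idx = np.where((V[1:-1] < V[:-2]) & (V[1:-1] <= V[2:]))[0] + 1
print("well spacing:", np.diff(x[idx]).mean(), "m   (lam/2 =", lam / 2, ")")
```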
Lunch time!
Alex Kuzmich had the after-lunch talk spot, always dreaded because of food coma effects 🙂 Alex discussed using nuclear states to build a clock. Nuclear energy levels are usually separated by a large energy, typically of the order of MeV. It turns out, however, that 229Th has a low-energy excited state: somewhere around 7.5 eV. This lower energy means these states should be accessible to excitation with lasers. Since these nuclear states are influenced less by background electromagnetic fields, it has been suggested that they could be used for a very stable clock. Alex discussed preliminary efforts at Georgia Tech (the Ramblin’ Wreck) to trap ionized 229Th, in particular Th3+.
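Back-of-the-envelope on why ~7.5 eV counts as laser accessible (my own arithmetic, not from the talk): a photon’s wavelength is λ = hc/E, which for 7.5 eV is about 165 nm, vacuum ultraviolet but within reach of laser technology, whereas an MeV-scale transition would need a gamma ray.
```python
# Photon wavelength for the proposed 229Th isomer transition (rough numbers).
h_c = 1239.84            # h*c in eV.nm
for E in (7.5, 1e6):     # the ~7.5 eV isomer vs a typical ~MeV nuclear transition
    print(f"{E:g} eV  ->  {h_c / E:.3g} nm")
# ~165 nm (vacuum UV, laser-accessible) vs ~0.00124 nm (a gamma ray)
```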
No sooner did I write that last sentence than Alex started talking about quantum repeaters using atomic ensembles. Doh, I now have great sympathy for sports journalists who have to generate stories immediately upon the completion of the sporting event. So the second half of Alex’s talk focused on atomic ensembles used for quantum information processing. Importantly, they have made a quantum repeater which works in the telecom band (although it seems there are still some fidelity improvements necessary before this can be used in, say, real-world quantum key distribution?).
Next up (I’ve now got Diet Coke in me) was Patrick Hayden, who talked about his joint work with John Preskill on black holes (see arXiv:0708.4025). Okay, explaining this better than the paper linked above, or than Patrick’s talk, is a task I’m certainly not up to. The basic gist is that thinking about the thermalizing dynamics of a black hole as a random unitary, or as a small random quantum circuit, just barely seems to save black hole complementarity. Just. I highly recommend section two of that paper, which provides a very readable classical model. On a note related to the talk, Patrick told me that Leonard Susskind was originally a plumber. Holy moly, is that true?
The next talk was by Gregory Crosswhite. If I blogged about Gregory’s talk I would be violating graduate student/advisor confidentiality, so I will resist. But I will say that Gregory’s talk was programmed using over 3000 lines of Python code. That’s right: his talk wasn’t in PowerPoint but in Python.
Okay, I had to run away and missed the next talk and a half because I wanted to show someone my old haunt, the Santa Fe Institute (which was too bad, because Jon Walgate at one SQuInT gave what I consider to be the best talk I’ve ever seen; it involved pirates). I miss that place. Why? Because to me, at least, it represents intellectual curiosity, interesting people, a beautiful view, and, well, did I mention it’s on the road to the ski area (a whole 13 miles or so away!)?
I’ll admit it, I then missed the final talks as I was talking to my crazy colleagues in the hallway. Plus my brain was full.

5 Replies to “SQuInT Live Blogging – Friday Talks”

  1. Hi Dave. Thanks for blogging SQuInT — it’s almost like being there. Regarding your question about fault-tolerance with continuous variables … well, actually, I’m not sure I understand the question. I don’t think computation with continuous quantum variables can be fault-tolerant any more than computation with continuous classical variables (a.k.a. analog classical computation) can be. At least not if we consider errors in which the variables drift smoothly, like errors due to quantum diffusion of an oscillator. The point of our paper (Gottesman-Kitaev-Preskill) is that fault-tolerance can be achieved by “digitizing” the continuous variable system, i.e., by embedding a finite-dimensional system inside it. Our idea is sort of like digitizing a continuous classical mechanical system by introducing a potential with two minima, and encoding a bit by preparing the system in one minimum or the other. The system might heat up, but if it stays cool enough so that it’s unlikely to surmount the potential barrier, then we can restandardize it (cool it again) to protect the information. In the quantum case, we can confine the system in both p and q (actually by introducing a lattice in phase space) and encode a qubit by preparing the system in either of two sublattices (or a superposition of same). The system might drift away from its original prepared state, but if it doesn’t drift too far we can restandardize it (e.g., by measuring the syndrome of a stabilizer code and applying a recovery operation).

  2. Actually, Dave, what I said was there wasn’t a rigorously proven threshold for CV cluster states, and I did cite GKP as an example of previous thoughts on the subject. I agree with John that there will need to be some discretization of the information at some point or it’s hopeless. But the goal of our project right now is to try to do optical cluster state computation from the “top down”, by focusing on efficient, large state preparation, and then seeing what types of error correction and fault tolerance are natural to add on afterwards. I want to be clear that we do not have any new ideas about fault tolerance or error correction yet, and these are major open problems for CV cluster states. I hope to address these issues in the near future. But by showing how one might generate large CV cluster states easily, I (and my coauthors) hope to spur more work on the subject, as we’ve seen happen with, say, adiabatic quantum computation.

  3. Steve, John: Yes, I know you were talking about the cluster state version. What I was trying to remember was what I am supposed to say if someone asks me for the threshold for the GKP version of using discrete versions of continuous variable systems.
    Jim: Doh. Lost that in an update!

  4. There’s a factor of a billion missing from the number of times the photon bounces in the cavity (on average) from Haroche’s talk.

  5. Oh, okay. Well regarding the threshold, we described in the paper how to realize a universal set of protected gates acting on the qubits encoded in the continuous variable system, so the issue is whether these encoded gates satisfy threshold conditions that have been estimated for quantum computation using qubits.
    How we describe the threshold condition for the underlying continuous variable system depends on the noise model we want to use. The codewords are highly squeezed states and encoded Clifford group gates are realized using Gaussian operations. The encoded operations can have imperfect fidelity because of imperfect preparation of the codewords, because of quantum diffusion occurring while the operations are being performed, and because of inaccuracies in homodyne detection — this can be quantified. For universality, we also need, in addition to the Clifford operations, the ability to prepare noisy magic states that can be distilled using Clifford operations. For the magic state preparation, we use (non-Gaussian) photon counting, so the noise in the magic states can be related to imperfections in the counters and other things. In summary, there is a threshold estimate that can in principle be stated in terms of the operations acting on continuous quantum variables, but it is not simple to state it precisely.
