Who should blog in 2013?

The quantum theory blogosphere has seen some great new additions this year:

But this is not enough! Our researchers are legion. And so must it be with our blogs.
There really are a huge number of creative and interesting people in our field, and it would be great if more of them shared their thoughts and opinions online. Therefore, let’s see if the Quantum Pontiff faithful can convince a few people to start blogging in 2013. It doesn’t have to be a lot: let’s say 10 posts for the year.
So leave the name of someone that you’d like to see blogging in the comments section. When we see these people at QIP we can bug them, “Have you started blogging yet?”
I’ll start things off by naming a few people off the top of my head that I wish would blog: David Poulin, Dorit Aharonov, Patrick Hayden. Perhaps we can even goad some weedy unkempt blogs to till their fields again. Matt Leifer and Tobias Osborne, I’m looking at you. 🙂

Cirac and Zoller win the Wolf Prize for physics


Ignacio Cirac and Peter Zoller were just announced as winners of the 2013 Wolf Prize for physics. I’m not sure if this is the official citation, but the Jerusalem Post is saying the prize is:

for groundbreaking theoretical contributions to quantum information processing, quantum optics and the physics of quantum gases.

If that isn’t the official citation, then it is certainly an accurate assessment of their work.
Cirac and Zoller are in very good company: the previous Wolf Prize winners have all made exceptional contributions to physics, and many of them have gone on to win Nobel Prizes.
It’s great to see these two giants of the field get the recognition that they richly deserve. Congratulations to both!

Apocalypses, Firewalls, and Boltzmann Brains


Last week’s plebeian scare-mongering about the world ending at the wraparound of the Mayan calendar did not distract sophisticated readers of gr-qc and quant-ph from a more arcane problem, the so-called Firewall Question. This concerns what happens to Alice when she falls through the event horizon of a large, mature black hole. Until recently it was thought that nothing special would happen to her other than losing her ability to communicate with the outside world, regardless of whether the black hole was old or young, provided it was large enough for space to be nearly flat at the horizon. But lately Almheiri, Marolf, Polchinski, and Sully argued (see also Preskill’s Quantum Frontiers post and especially the comments on it) that she would instead be vaporized instantly and painlessly as she crossed the horizon. From Alice’s point of view, hitting the firewall would be like dying in her sleep: her experience would simply end. Alice’s friends wouldn’t notice the firewall either, since they would either be outside the horizon, where they couldn’t see her, or inside and also vaporized. So the firewall question, aside from being central to harmonizing no-cloning with black hole complementarity, has a delicious epistemological ambiguity.
Notwithstanding these conceptual attractions, firewalls are not a pressing practical problem, because the universe is far too young to contain any of the kind of black holes expected to have them (large black holes that have evaporated more than half their total mass).
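As a rough sanity check (my numbers, not from the firewall papers): the textbook Hawking lifetime of a black hole of mass $latex M$ is $latex t_{\mathrm{evap}} \approx 5120\,\pi\, G^2 M^3 / \hbar c^4$, up to order-one greybody corrections. For a solar mass this is about $latex 10^{67}$ years, and since $latex t \propto M^3$, a hole does not become "old" (half evaporated) until roughly $latex 7/8$ of that lifetime has passed. Compared with the universe's age of $latex 1.4\times 10^{10}$ years, no astrophysical black hole is anywhere close.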
A more worrisome kind of instant destruction, both practically and theoretically, is the possibility that the observable universe—the portion of the universe accessible to us—may be in a metastable state, and might decay catastrophically to a more stable ground state. Once nucleated, either spontaneously or through some ill-advised human activity, such a vacuum phase transition would propagate at the speed of light, annihilating the universe we know before we could realize what was happening—our universe would die in its sleep. Most scientists, even cosmologists, don’t worry much about this either, because our universe has been around so long that spontaneous nucleation appears less of a threat than other more localized disasters, such as a nearby supernova or a collision with an asteroid. When some people, following the precautionary principle, tried to stop a proposed high-energy physics experiment at Brookhaven Lab because it might nucleate a vacuum phase transition or some other world-destroying disaster, prominent scientists argued that if so, naturally occurring cosmic-ray collisions would already have triggered the disaster long ago. They prevailed, the experiment was done, and nothing bad happened.
The confidence of most scientists, and laypeople, in the stability of the universe rests on gut-level inductive reasoning: the universe contains ample evidence (fossils, the cosmic microwave background, etc.) of having been around for a long time, and it hasn’t disappeared lately. Even my four-year-old granddaughter understands this. When she heard that some people thought the world would end on Dec 21, 2012, she said, “That’s silly. The world isn’t going to end.”
The observable universe is full of regularities, both obvious and hidden, that underlie the success of science, the human activity which the New York Times rightly called the best idea of the second millennium.  Several months ago in this blog, in an effort to formalize the kind of organized complexity which science studies, I argued that a structure should be considered complex, or logically deep, to the extent that it contains internal evidence of a complicated causal history, one that would take a long time for a universal computer to simulate starting from an algorithmically random input.
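For readers who want something sharper than prose, here is one standard way the idea gets formalized (my paraphrase and notation, not a quotation from the earlier post): writing $latex K(x)$ for the prefix Kolmogorov complexity of a string $latex x$ and $latex T(p)$ for the running time of a program $latex p$ on a fixed universal machine $latex U$, the logical depth of $latex x$ at significance level $latex s$ is roughly
$latex \mathrm{depth}_s(x) = \min\{\, T(p) : U(p) = x,\ |p| \le K(x) + s \,\},$
the least time needed to produce $latex x$ from a nearly incompressible description. Deep objects are those for which even this fastest near-minimal computation takes a long time; an algorithmically random string is shallow, since printing it verbatim is both short and fast.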
Besides making science possible, the observable universe’s regularities give each of us our notion of “us”, of being one of several billion similar beings, instead of the universe’s sole sentient inhabitant.  An extreme form of that lonely alternative, called a Boltzmann brain, is a hypothetical fluctuation arising within a large universe at thermal equilibrium, or in some other highly chaotic state,  the fluctuation being just large enough to support a single momentarily functioning human brain, with illusory perceptions of an orderly outside world, memories of things that never happened, and expectations of a future that would never happen, because the brain would be quickly destroyed by the onslaught of its hostile actual environment.  Most people don’t believe they are Boltzmann brains because in practice science works.   If a Boltzmann brain observer lived long enough to explore some part of its environment not prerequisite to its own existence, it would find chaos there, not order, and yet we generally find order.
Over the last several decades, while minding their own business and applying the scientific method in a routine way, cosmologists stumbled into an uncomfortable situation: the otherwise successful theory of eternal inflation seemed to imply that tiny Boltzmann brain universes were more probable than big, real universes containing galaxies, stars, and people.  More precisely, in these models, the observable universe is part of an infinite seething multiverse, within which real and fake universes each appear infinitely often, with no evident way of defining their relative probabilities—the so-called “measure problem”.
Cosmologists Raphael Bousso, Leonard Susskind, and Yasunori Nomura (cf. also a later paper) recently proposed a quantum solution to the measure problem, treating the inflationary multiverse as a superposition of terms, one for each universe, including all the real and fake universes that look more or less like ours, and many others whose physics is so different that nothing of interest happens there.  Sean Carroll comments accessibly and with cautious approval on these and related attempts to identify the multiverse of inflation with that of many-worlds quantum mechanics.
Aside from the measure problem and the nature of the multiverse, it seems to me that in order to understand why the observed universe is complicated and orderly, we need to better characterize what a sentient observer is. For example, can there be a sentient observer who/which is not complex in the sense of logical depth? A Boltzmann brain would at first appear to be an example of this, because though (briefly) sentient it has by definition not had a long causal history. It is nevertheless logically deep, because despite its short actual history it has the same microanatomy as a real brain, which (most plausibly) has had a long causal history. The Boltzmann brain’s evidence of having had a long history is thus deceptive, like the spurious evidence of meaning the protagonists in Borges’ Library of Babel find by sifting through mountains of chaotic books, until they find one with a few meaningful lines.
I am grateful to John Preskill and especially Alejandro Jenkins for helping me correct and improve early versions of this post, but of course take full responsibility for the errors and misconceptions it may yet contain.

Are we ready for Venture Qapital?

From cnet, and via Matt Leifer, comes news of a new venture capital firm known as the Quantum Wave Fund. According to their website:

Quantum Wave Fund is a venture capital firm focused on seeking out early stage private companies with breakthrough quantum technology. Our mission is to help these companies capitalize on their opportunities and provide a platform for our investors to participate in the quantum technology wave.

The cnet article clarifies that “quantum technology” means “Security, new measurement devices, and new materials,” which seems about right for what we can expect to meaningfully commercialize in the near term. In fact, two companies (ID Quantique and MagiQ) are already doing so. However, I think it is significant that ID Quantique’s first listed product uses AES-256 (though it can be upgraded to use QKD), and that MagiQ’s product list first describes technologies like waveform generation and single-photon detection before advertising their QKD technology at the bottom of the page.
It’ll be interesting to see where this goes. Already it has exposed several areas of my own ignorance. For example, from the internet I learned that VCs typically want to get their money back within 10-12 years, which gives a rough estimate of how near-term a technology has to be before we can expect investment in it. Another area I know little about, and which is harder to google, is exactly what sort of commercial applications exist for the many technologies related to quantum information, such as precision measurement and timing. This question is, I think, going to be an increasingly important one for all of us.

21 = 3 * 7

…with high probability.
That joke was recycled, and in recent work, Bristol experimentalists have found a way to recycle photonic qubits to perform Shor’s algorithm using only $latex \lceil \log N \rceil + 1$ qubits. (Ok, so actually it was Kitaev who figured this out, as they are basically using Kitaev’s semi-classical QFT. But they found a way to do this with photons.) Further reductions in the number of qubits can be achieved by “compiling” to throw out the unused ones. In this case, we are left with a qubit and a qutrit. The measurement feedback is also replaced with the usual postselection. However, buried in the body of the paper is the main advance:

Our scheme therefore requires two consecutive photonic CNOT gates–something that has not previously been demonstrated–acting on qubit subspaces of the qutrit.

This is important progress, but until the factored numbers get a lot larger, the “size of number factored” yardstick is going to be a frustrating one. For example, how is it that factoring 143 on an NMR quantum computer takes only 4 qubits while factoring 15 takes 7?
Anyone have ideas for something more objective, like a quantum analogue of MIPS? (Incidentally, apparently MIPS is obsolete because of memory hierarchies.) Perhaps Grover’s algorithm would be good, as “compiling” is harder to justify, and noise is harder to survive.
Update:
In the comments, Tom Lawson points out the real significance of factoring 21. This is the smallest example of factoring in which the desired distribution is not uniformly random, thus making it possible to distinguish the successful execution of an experiment from the effects of noise.
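For the curious, here is a minimal classical sketch in Python (my own illustration, not the Bristol group’s circuit; the helper name factors_from_order is mine) of the post-processing that turns an order-finding result into the factors of 21. The order-finding loop below is exactly the part the quantum hardware is supposed to do for you.

```python
from math import gcd

def factors_from_order(N, a, r):
    """Classical half of Shor's algorithm: given the multiplicative order r
    of a mod N, try to extract nontrivial factors of N."""
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None  # unlucky base; pick another a and repeat
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

N, a = 21, 2
# Order finding is the step the experiment performs with a qubit and a qutrit;
# here we simply brute-force it classically for illustration.
r = next(k for k in range(1, N) if pow(a, k, N) == 1)
print(r, factors_from_order(N, a, r))  # prints: 6 (7, 3)
```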

Haroche and Wineland win Physics Nobel

(Photos: David Wineland and Serge Haroche)

The physics prize was shared between experimentalists Serge Haroche and David Wineland, longtime leaders in the study of atom-photon interaction. In recent decades both have honed their techniques to meet the challenges and opportunities opened by “quantum information science,” which aims to rebuild the theory and practice of communication and computation on quantum foundations. This change of viewpoint was led by theorists, beginning with John Bell, and was initially regarded skeptically not only by information theorists and computer scientists, on whose turf it encroached, but even by many physicists, who saw a lot of theorizing, verging on philosophy, with little practice to back it up. Haroche, working often with Rydberg atoms and microwave cavities, and Wineland, with trapped ions and optical fields, took the new approach seriously, and over many years have provided much of the solid foundation of practice that has by now earned the field the right to be taken seriously. At the same time both researchers have done their part to restrain the inevitable hype. A decade and a half ago Haroche, in articles like “Quantum Computing: Dream or Nightmare?”, pointed out how difficult building a quantum computer would be, while always believing it possible in principle, and in the meantime produced, with his group, an impressive stream of experimental results and technical improvements that made it ever more practical. In the same vein, Wineland, when asked if ion traps were the right hardware for building a quantum computer, answered that whatever advantage they had was like being 10 feet ahead at the start of a 10-mile race. Then, like Haroche, he went ahead making steady progress in the control and measurement of individual particles, with applications quite apart from that distant goal.
Both men are consummate experimentalists, finding and adapting whatever it takes. I visited Wineland’s lab about a decade ago and noticed a common dishwashing glove (right-handed and light blue, as I recall) interposed between the ion trap’s optical window and a CCD camera focused on the ions within. I asked David what its function was among all the more professional-looking equipment. He said this particular brand of gloves happened to be quite opaque, with a matte black inside as good as anything he could get from an optics catalog, meanwhile combining moderate flexibility with sufficient rigidity to stay out of the way of the light path, unlike, say, a piece of black velvet. Indeed the cut-off thumb fitted nicely onto the optical window, and the wrist was snugly belted around the front of the camera, leaving the fingers harmlessly but ludicrously poking out at the side.
The physics Nobel has occasioned a lot of press coverage, much of it quite good in conveying the excitement of quantum information science while restraining unrealistic expectations. We especially like Jason Palmer’s story from earlier this year, which the BBC resurrected to explain a field that this Nobel has suddenly thrust into the limelight. We congratulate Haroche and Wineland as deserving and timely winners of this first Nobel given to people who could fairly be described, and would now describe themselves, as quantum information scientists.

Physics World gets high on Tel Aviv catnip

It should be no surprise that loose talk by scientists on tantalizing subjects like backward causality can impair the judgment of science writers working on a short deadline. A recent paper by Aharonov, Cohen, Grossman and Elitzur at Tel Aviv University, provocatively titled “Can a Future Choice Affect a Past Measurement’s Outcome?”, so intoxicated Philip Ball, writing in Physics World, that a casual reader of his piece would likely conclude that the answer was “Yes! But no one quite understands how it works, and it probably has something to do with free will.” A more sober reading of the original paper’s substantive content would be:

  •  As John Bell showed in 1964, quantum systems’ behavior cannot be explained by local hidden variable models of the usual sort, wherein each particle carries information determining the result of any measurement that might be performed on it.
  • The Two-State-Vector Formalism (TSVF) for quantum mechanics, although equivalent in its predictions to ordinary nonlocal quantum theory, can be viewed as a more complicated kind of local hidden variable model, one that, by depending on a final as well as an initial condition, and being local in space-time rather than space, escapes Bell’s prohibition.

This incident illustrates two unfortunate tendencies in quantum foundations research:

  • Many in the field believe in their own formulation or interpretation of quantum mechanics so fervently that they give short shrift to other formulations, rather than treating them more charitably, as complementary approaches to understanding a simple but hard-to-intuit reality.
  • There is a long history of trying to fit the phenomenology of entanglement into inappropriate everyday language, like Einstein’s “spooky action at a distance”.

Surely the worst mistake of this sort was Herbert’s 1981 “FLASH” proposal to use entanglement for superluminal signaling, whose refutation may have hastened the discovery of the no-cloning theorem. The Tel Aviv authors would never make such a crude mistake—indeed, far from arguing that superluminal signaling is possible, they use it as a straw man for their formalism to demolish. But unfortunately, to make their straw man look stronger before demolishing him, they use language that, like Einstein’s, obscures the crucial difference between communication and correlation. They say that the initial (weak) measurement outcomes “anticipate the experimenter’s future choice” but that doing so causes no violation of causality because the “anticipation is encrypted”. This is as wrongheaded as saying that when Alice sends Bob a classical secret key, intending to use it for one-time-pad encryption, the key is already an encrypted anticipation of whatever message she might later send with it. Or, to take a more quantum example, it’s like saying that half of any maximally entangled pair of qubits is already an encrypted anticipation of whatever quantum state might later be teleported through it.
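To make the communication-versus-correlation distinction concrete, here is a toy one-time-pad sketch in Python (my illustration only, unrelated to anything in the Tel Aviv paper): the same pre-shared random key works equally well for either message, so by itself it “anticipates” nothing about which message Alice will later choose to send.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(16)  # shared in advance, before any message exists
msg1, msg2 = b"ATTACK AT DAWN!!", b"RETREAT AT DUSK!"

# Either message encrypts and decrypts perfectly under the same key.
c1, c2 = xor(msg1, key), xor(msg2, key)
assert xor(c1, key) == msg1 and xor(c2, key) == msg2
```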
Near the end of their paper the Tel Aviv group hits another hot button,  coyly suggesting that encrypted anticipation may help explain free will, by giving humans “full freedom from both past and future constraints.”  The issue of free will appeared also in Ball’s piece (following a brief but fair summary of my critique) as a quote attributing to Yakir Aharonov, the senior Tel Aviv author, the opinion that  humans have free will even though God knows exactly what they will do.
The authors, and reviewer, would have served their readers better by eschewing  the “concept” of  encrypted anticipation and instead concentrating on how TSVF makes a local picture of quantum evolution possible.  In particular they could have compared TSVF with another attempt to give orthodox quantum mechanics a fully local explanation,  Deutsch and Hayden’s 1999 information flow formalism.

Quantum Frontiers

As a postdoc at Caltech, I would often have lunch with John Preskill. About once per week, we would play a game. During the short walk back, I would think of a question to which I didn’t know the answer. Then, with maybe 100 meters to go, I would ask John that question. He would have to answer it via a 20-minute impromptu lecture, delivered right away, as soon as we walked into the building.
Now, these were not easy questions. At least, not to your average person, or even your average physicist. For example, “John, why do neutrinos have a small but nonzero mass?” Perhaps any high-energy theorist worth their salt would know the answer to that question, but it simply isn’t part of the training for most physicists, especially those in quantum information science.
Every single time, John would give a clear, concise and logically well-organized answer to the question at hand. He never skimped on equations when they were called for, but he would often analyze these problems using simple symmetry arguments and dimensional analysis—undergraduate physics!  At the end of each lecture, you really felt like you understood the answer to the question that was asked, which only moments ago seemed like it might be impossible to answer.
But the point of this post is not to praise John. Instead, I’m writing it so that I can set high expectations for John’s new blog, called Quantum Frontiers. Yes, that’s right, John Preskill has a blog now, and I hope that he’ll exceed these high expectations with content of similar or higher quality to what I witnessed in those after-lunch lectures. (John, if you’re reading this, no pressure.)
And John won’t be the only one blogging. It seems that the entire Caltech IQIM aims to “bring you firsthand accounts of the groundbreaking research taking place inside the labs of IQIM, and to answer your questions about our past, present and future work on some of the most fascinating questions at the frontiers of quantum science.”
This sounds pretty exciting, and it’s definitely a welcome addition to the (underrepresented?) quantum blogosphere.

300

Credit: Britton/NIST

One of the more exciting prospects for near-term experimental quantum computation is to realize a large-scale quantum simulator. Now getting a rigorous definition of a quantum simulator is tricky, but intuitively the concept is clear: we wish to have quantum systems in the lab with tunable interactions which can be used to simulate other quantum systems that we might not be able to control, or even create, in their “native” setting. A good analogy is a scale model which might be used to simulate the fluid flow around an airplane wing. Of course, these days you would use a digital simulation of that wing with finite element analysis, but in the analogy that would correspond to using a fault-tolerant quantum computer, a much bigger challenge to realize.
We’ve highlighted the ongoing progress in quantum simulators using optical lattices before, but now ion traps are catching up in interesting ways. They have literally leaped into the next dimension and trapped an astounding 300 ions in a 2D trap with a tunable Ising-like coupling. Previous efforts had a 1D trapping geometry and ~10 qubits; see e.g. this paper (arXiv).
J. W. Britton et al. report in Nature (arXiv version) that they can form a triangular lattice of beryllium ions in a Penning trap where the strength of the interaction between ions $latex i$ and $latex j$ can be tuned to $latex J_{i,j} \sim d(i,j)^{-a}$ for any $latex 0 < a < 3$, where $latex d(i,j)$ is the distance between spins $latex i$ and $latex j$, simply by changing the detuning on their lasers. (They only give results up to $latex a = 1.4$ in the paper, however.) They can change the sign of the coupling, too, so that the interactions are either ferromagnetic or antiferromagnetic (the more interesting case). They also have global control of the spins via a controllable homogeneous single-qubit coupling. Unfortunately, one of the things that they don’t have is individual addressing with the control.
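To get a feel for the kind of coupling matrix being engineered here, the following toy Python sketch (my own illustration with made-up helper names, not the NIST analysis code) builds the power-law couplings $latex J_{i,j} \propto d(i,j)^{-a}$ on a small triangular lattice.

```python
import numpy as np

def triangular_positions(rows, cols, spacing=1.0):
    """Ion positions on an idealized 2D triangular lattice (illustrative only)."""
    pts = [(spacing * (c + 0.5 * (r % 2)), spacing * r * np.sqrt(3) / 2)
           for r in range(rows) for c in range(cols)]
    return np.array(pts)

def ising_couplings(positions, a, J0=1.0):
    """Power-law Ising couplings J_ij = J0 / d(i,j)**a between all spin pairs."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        J = J0 / d**a
    np.fill_diagonal(J, 0.0)  # no self-coupling
    return J

pos = triangular_positions(3, 4)            # a 12-ion toy lattice, not 300 ions
print(ising_couplings(pos, a=1.4).round(3))
```

Dialing $latex a$ toward 0 makes the couplings essentially uniform (all-to-all), while pushing it toward 3 makes them fall off like a dipole-dipole interaction; the laser detuning is what moves the experiment along this family.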
In spite of the lack of individual control, they can still turn up the interaction strength beyond the point where a simple mean-field model agrees with their data. In a) and b) you see a pulse sequence on the Bloch sphere, and in c) and d) you see the probability of measuring spin-up along the z-axis. Figure c) is the weak-coupling limit where mean-field holds, and d) is where the mean-field no longer applies.
Credit: Britton/NIST

Whether or not there is an efficient way to replicate all of the observations from this experiment on a classical computer is not entirely clear. Of course, we can’t prove that they can’t be simulated classically—after all, we can’t even separate P from PSPACE! But it is not hard to imagine that we are fast approaching the stage where our quantum simulators are probing regimes that can’t be explained by current theory due to the computational intractability of performing the calculation using any existing methods. What an exciting time to be doing quantum physics!

Having it both ways

In one of Jorge Luis Borges’ historical fictions, an elderly Averroes, remarking on a misguided opinion of his youth,  says that to be free of an error it is well to have professed it oneself.  Something like this seems to have happened on a shorter time scale in the ArXiv, with last November’s The quantum state cannot be interpreted statistically  sharing two authors with this January’s The quantum state can be interpreted statistically.  The more recent paper explains that the two results are actually consistent because the later paper abandons the earlier paper’s assumption that independent preparations result in an ontic state of product form.  To us this seems an exceedingly natural assumption, since it is hard to see how inductive inference would work in a world where independent preparations did not result in independent states.  To their credit,  and unlike flip-flopping politicians, the authors do not advocate or defend their more recent position; they only assert that it is logically consistent.