Although reputable news sources pointed out that most scientists think some more mundane explanation will be found for the too-early arrival of CERN-generated neutrinos at Gran Sasso (recently confirmed by a second round of experiments with much briefer pulse durations, designed to exclude the most likely sources of systematic error), the take-home message for most non-scientists seems to have been “Einstein was wrong. Things can go faster than light.” Scientists trying to explain their skepticism often end up sounding closed-minded and arrogant. People say, “Why don’t you take evidence of faster-than-light travel at face value, rather than saying it must be wrong because it disagrees with Einstein?” The macho desire not to be bound by an arbitrary speed limit doubtless also helps explain why warp drives are such a staple of science fiction. At a recent dinner party, as my wife silently reminded me that a lecture on time dilation and FitzGerald contraction would be inappropriate, the best I could come up with was an analogy to another branch of physics where lay people’s intuition accords better with that of specialists: I told them, without giving them any reason to believe me, that Einstein showed that faster-than-light travel would be about as far-reaching and disruptive in its consequences as an engine that required no fuel.
That was too crude an analogy. Certainly a fuelless engine, if it could be built, would be more disruptive in its practical consequences, whereas faster-than-light neutrinos could be accommodated, without creating any paradoxes of time travel, if there were a preferred reference frame within which neutrinos traveling through rock could go faster than light, while other particles, including neutrinos traveling through empty space, would behave in the usual Lorentz-invariant fashion supported by innumerable experiments and astronomical observations.
But it is wrong to blame mere populist distrust of authority for this disconnect between lay and expert opinion. Rather the fault lies with a failure of science education, leaving the public with a good intuition for Galilean relativity, but little understanding of how it has been superseded by special relativity. So maybe, after dinner is over and my audience is no longer captive, I should retell the old story of cosmic-ray-generated muons, who see the onrushing Earth as having an atmosphere only a few feet thick, while terrestrial observers see the muons’ lifetime as having been extended manyfold by time dilation.
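If my dinner companions want to check the arithmetic, the standard back-of-the-envelope version of that story goes as follows; the specific numbers below are illustrative choices of mine, not figures from any particular cosmic-ray measurement.

```latex
% Time dilation and length contraction are governed by the Lorentz factor:
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
t_{\mathrm{lab}} = \gamma\,\tau, \qquad
L_{\mathrm{muon}} = \frac{L_{\mathrm{lab}}}{\gamma}.
% For a very energetic muon with, say, \gamma \approx 10^{4}:
%   the 2.2-microsecond proper lifetime \tau stretches to roughly 22 ms in the
%   Earth frame, ample time to reach the ground, while the ~15 km atmosphere
%   contracts to ~1.5 m ("a few feet") in the muon's frame.
```

Both observers agree that the muon reaches the ground; they disagree only about whether it was the muon’s clock that ran slow or the atmosphere that was thin.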
It is this difference in appreciation of special relativity that accounts for the fact that for most people, faster-than-light travel seems far more plausible than time travel, whereas for experts, time travel, via closed timelike curves of general relativistic origin, is more plausible than faster-than-light travel in flat spacetime.
Let Eve do the heavy lifting, while John and Won-Young keep her honest.
Photon detectors have turned out to be an Achilles’ heel for quantum key distribution (QKD), inadvertently opening the door of Bob’s lab to subtle side-channel attacks, most famously quantum hacking, in which a macroscopic light signal from Eve subverts Bob’s detectors into seeing all and only the “photons” she wants him to see. Recently Lo, Curty, and Qi (“LCQ”) have combined several preexisting ideas into what looks like an elegant solution for the untrusted detector problem, which they call measurement-device-independent QKD. In brief, they let Eve operate the detectors and broadcast the measurement results, but in a way that does not require Alice or Bob to trust anything she says.
Precursors of this approach include device-independent QKD, in which neither the light sources nor the detectors need be trusted (but unfortunately the detectors need to be impractically efficient), and time-reversed Bell-state methods, in which a Bell measurement substitutes for the Bell-state preparation at the heart of most entanglement-based QKD. It has also long been understood that quantum teleportation can serve as a filter to clean an untrusted quantum signal, stripping it of extraneous degrees of freedom that might be used as side channels. A recent eprint by Braunstein and Pirandola develops the teleportation approach into a mature form, in which side-channel attacks are prevented by the fact that no quantum information ever enters Alice’s or Bob’s lab. (This paper is accompanied by an unusual “posting statement,” the academic analog of a Presidential signing statement in US politics. This sort of thing ought to be little needed and little used in our collegial profession.)

Two more ingredients bring the LCQ proposal to an exciting level of practicality: weak coherent pulse sources and decoy states. In the LCQ protocol, Alice and Bob each operate, and must trust, a local random number generator and a weak coherent source (e.g., an attenuated laser with associated polarization-control optics), which they aim at Eve, who makes measurements effectively projecting pairs of simultaneously arriving dim light pulses onto the Bell basis. If Eve lies about which Bell state she saw, she will not be believed, because her reported results will be inconsistent with the states Alice and Bob know they sent. The final ingredient needed to keep Eve honest, the decoy-state technique introduced by W.Y. Hwang and subsequently developed by many others, prevents Eve from lying about the efficiency of her detectors, for example reporting a successful two-photon coincidence only when she has received more than one photon from each sender.

Fitting all the pieces together, it appears that the LCQ protocol would work over practical distances, with practical sources and detectors, and, if properly implemented, be secure against known attacks, short of bugging or eavesdropping on the interior of Alice’s or Bob’s lab.
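To make concrete why a lying Eve gets caught, here is a minimal toy simulation of the sifting step of an MDI-QKD-style protocol. It is a sketch under simplifying assumptions of my own, not the LCQ protocol itself: ideal single-photon BB84 states instead of weak coherent pulses and decoy states, a lossless channel, and an ideal linear-optics Bell analyzer that can distinguish only the two ψ Bell states. All names and parameters are hypothetical.

```python
import random

Z, X = "Z", "X"
PSI_PLUS, PSI_MINUS, FAIL = "psi+", "psi-", "fail"

def honest_relay(basis_a, bit_a, basis_b, bit_b):
    """Ideal partial Bell measurement on two single-photon BB84 states.
    Only |psi+> and |psi-> are distinguishable; everything else is a failure."""
    if basis_a != basis_b:
        # Mismatched bases: each psi outcome occurs with probability 1/4;
        # these rounds are discarded during sifting anyway.
        return random.choice([PSI_PLUS, PSI_MINUS]) if random.random() < 0.5 else FAIL
    if basis_a == Z:
        # |a>|b> has psi components only when the bits differ.
        return random.choice([PSI_PLUS, PSI_MINUS]) if bit_a != bit_b else FAIL
    # X basis: equal bits can yield psi+, unequal bits psi-, each half the time.
    if random.random() < 0.5:
        return PSI_PLUS if bit_a == bit_b else PSI_MINUS
    return FAIL

def lying_relay(basis_a, bit_a, basis_b, bit_b):
    """Eve ignores the incoming pulses and announces whatever she likes."""
    return random.choice([PSI_PLUS, PSI_MINUS])

def run(relay, rounds=20000):
    """Alice and Bob send random BB84 states, sift on the relay's announcements,
    and return the error rate of the resulting raw key."""
    key_a, key_b = [], []
    for _ in range(rounds):
        basis_a, bit_a = random.choice([Z, X]), random.randint(0, 1)
        basis_b, bit_b = random.choice([Z, X]), random.randint(0, 1)
        outcome = relay(basis_a, bit_a, basis_b, bit_b)
        if outcome == FAIL or basis_a != basis_b:
            continue  # keep only announced successes with matching bases
        # Bob flips his bit on psi- always, and on psi+ only in the Z basis.
        flip = (outcome == PSI_MINUS) or (basis_a == Z)
        key_a.append(bit_a)
        key_b.append(bit_b ^ int(flip))
    errors = sum(a != b for a, b in zip(key_a, key_b))
    return errors / max(len(key_a), 1)

print("error rate with honest relay:", run(honest_relay))  # close to 0
print("error rate with lying relay: ", run(lying_relay))   # close to 0.5
```

With the honest relay the sifted keys agree essentially perfectly, while a relay that simply invents Bell outcomes produces an error rate near 50%, so Alice and Bob would notice at the error-estimation stage and abort.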
Alice and Bob still need to trust their lasers, polarization and attenuation optics, and random number generators, and of course their control software. It is hard to see how Alice and Bob can achieve this trust short of custom-building these items themselves, out of mass-marketed commodity components unlikely to be sabotaged. A considerable element of do-it-yourself is probably essential in any practical cryptosystem, classical or quantum, to protect it from hidden bugs.

CHB acknowledges helpful discussions with Paul Kwiat, who is however not responsible for any opinions expressed here.
Quantum conferences at ETH-Zurich
ETH-Zurich, the MIT of Switzerland (or Europe), has hosted a spate of quantum conferences lately. The largest, called QIPC, for “Quantum Information Processing and Communications,” covered both experiment and theory, and took place at ETHZ’s suburban Science Center campus, in the HCI building, which looks like an elegant homage to MIT’s legendary Building 20, incubator of radar and much else during its 55-year existence.
A few weeks earlier, QIFT11, a small workshop on Quantum Information and the Foundations of Thermodynamics, invoked quantum information to shed a surprising amount of new light on such time-worn problems as Maxwell’s Demon and the origin of irreversible phenomenology from reversible dynamics.
This week, QCRYPT 2011, which boldly calls itself the First Annual Conference on Quantum Cryptography, is taking place in ETHZ’s downtown CAB building, again featuring both theory and experiment.