Uncertain on Uncertainty

Over at BBC News, there is an article about a recently published paper (arXiv) by Lee Rozema et al. that could lead to some, ehm, uncertainty about the status of the Heisenberg Uncertainty Principle (HUP).
Before dissecting the BBC article, let’s look at the paper by Rozema et al. The title is “Violation of Heisenberg’s Measurement–Disturbance Relationship by Weak Measurements”. While this title might raise a few eyebrows, the authors make it crystal clear in the opening sentence of the abstract that they didn’t disprove the HUP or some such nonsense. The HUP is a theorem within the standard formulation of quantum mechanics, so finding a violation of that would be equivalent to finding a violation of quantum theory itself! Instead, they look at the so-called measurement–disturbance relationship (MDR), which is a non-rigorous heuristic that is commonly taught to give an intuition for the uncertainty principle.
The HUP is usually stated in the form of the Robertson uncertainty relation, which says that a given quantum state psi cannot (in general) have zero variance with respect to both of two non-commuting observables. The more modern formulations are stated in a way that is independent of the quantum state; see this nice review by Wehner and Winter for more about these entropic uncertainty relations.
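For reference, the Robertson relation bounds the product of the standard deviations of two observables A and B in the state psi by the expectation value of their commutator, while the state-independent entropic version discussed by Wehner and Winter is (in the Maassen–Uffink form; notation mine):
$$\sigma_\psi(A)\,\sigma_\psi(B) \;\ge\; \tfrac{1}{2}\bigl|\langle\psi|[A,B]|\psi\rangle\bigr|,
\qquad
H(A) + H(B) \;\ge\; \log_2\frac{1}{c^2},\quad c = \max_{a,b}\,|\langle a|b\rangle|,$$
where H is the Shannon entropy of the measurement outcomes and the maximum runs over the eigenvectors of A and B.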
By contrast, the MDR states that the product of the measurement precision and the measurement disturbance (both quantified as root-mean-squared deviations between ideal and actual measurement variables) can’t be smaller than a constant of order Planck’s constant (hbar/2 in the usual convention). In 2002, Masanao Ozawa proved that this was inconsistent with standard quantum mechanics, and formulated a corrected version of the MDR that also takes into account the state-dependent variances of the observables. Building on Ozawa’s work, in 2010 Lund and Wiseman proposed an experiment which could measure the relevant quantities using the so-called weak value.
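In symbols, writing ε(A) for the precision (noise) of an A measurement and η(B) for the disturbance it imparts to B, the heuristic MDR and Ozawa’s corrected relation read (my notation, following Ozawa’s conventions; for x and p the right-hand side reduces to hbar/2):
$$\varepsilon(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr| \quad\text{(heuristic MDR)},$$
$$\varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr| \quad\text{(Ozawa)},$$
where σ denotes the ordinary standard deviation in the state being measured. The two extra state-dependent terms are what allow the first inequality to fail without any contradiction with quantum mechanics.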
Rozema et al. implemented the Lund-Wiseman scheme using measurements of complementary observables (X and Z) on the polarization states of a single photon to confirm Ozawa’s result, and to experimentally violate the MDR. The experiment is very cool, since it crucially relies on entanglement induced between the probe photon and the measurement apparatus.
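The weak-measurement scheme itself is beyond a blog post, but a minimal numerical sketch (my own toy model, not the Rozema et al. setup) shows how the naive MDR fails while Ozawa’s relation survives: model a sharp Z measurement on a polarization qubit as a CNOT coupling to a probe qubit, and evaluate Ozawa’s noise ε(Z) and disturbance η(X) directly from their definitions.

```python
import numpy as np

# Pauli operators for a single polarization qubit
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expval(op, state):
    """Expectation value <state|op|state> (real part)."""
    return np.real(np.conj(state) @ op @ state)

# System state: +1 eigenstate of Y, chosen so that <[Z, X]> = 2i<Y> is nonzero
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)
probe = np.array([1, 0], dtype=complex)      # probe qubit starts in |0>
full = np.kron(psi, probe)                   # joint system-probe state

# Toy measurement: a CNOT (system = control) writes the Z outcome onto the probe
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Ozawa's definitions in the Heisenberg picture:
#   noise^2       = <(U^dag (I (x) Z_probe) U - Z_sys (x) I)^2>
#   disturbance^2 = <(U^dag (X_sys (x) I) U - X_sys (x) I)^2>
noise_op = CNOT.conj().T @ np.kron(I2, Z) @ CNOT - np.kron(Z, I2)
dist_op  = CNOT.conj().T @ np.kron(X, I2) @ CNOT - np.kron(X, I2)

eps_Z = np.sqrt(expval(noise_op @ noise_op, full))   # precision of the Z readout -> 0
eta_X = np.sqrt(expval(dist_op @ dist_op, full))     # disturbance imparted to X  -> sqrt(2)

sigma_Z = np.sqrt(expval(Z @ Z, psi) - expval(Z, psi) ** 2)
sigma_X = np.sqrt(expval(X @ X, psi) - expval(X, psi) ** 2)
bound = 0.5 * abs(np.conj(psi) @ (Z @ X - X @ Z) @ psi)   # |<[Z, X]>| / 2 = 1

print("naive MDR :", eps_Z * eta_X, ">=", bound, "->", eps_Z * eta_X >= bound)  # violated
ozawa_lhs = eps_Z * eta_X + eps_Z * sigma_X + sigma_Z * eta_X
print("Ozawa     :", ozawa_lhs, ">=", bound, "->", ozawa_lhs >= bound)          # holds
```

Running this gives ε(Z)·η(X) = 0, below the commutator bound of 1, while the left-hand side of Ozawa’s relation comes out to √2 ≈ 1.41, comfortably above it.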
The bottom line: the uncertainty principle emerges completely unscathed, but the original hand-wavy MDR succumbs to both theoretical and now experimental violations.
Now let’s look at the BBC article. Right from the title and the subtitle, they get it wrong. “Heisenberg uncertainty principle stressed in new test”—no, that’s wrong—“Pioneering experiments have cast doubt on a founding idea…”—also no—the results were fully consistent with the HUP, and actually corroborated Ozawa’s theory of measurement–disturbance! Then they go on to say that this “could play havoc with ‘uncrackable codes’ of quantum cryptography.” It couldn’t: the security proofs of quantum cryptography rest on rigorous uncertainty relations of the Robertson and entropic type, not on the heuristic MDR. The rest of the article has a few more whoppers, but also some mildly redeeming features; after such a horrible start, though, you might as well quietly leave the pitch. Please, science journalists, try to do better next time.

6 Replies to “Uncertain on Uncertainty”

  1. I think that the confusion is partly caused by the fact that we call several different concepts by the name “Heisenberg Uncertainty Principle”, and only some of them can be regarded as fundamental consequences of quantum theory, with others depending on very specific choices of measurement setup. If you want to talk about what Heisenberg actually proved, then arguably the experiment does show that this is wrong, because he was looking at measurement-disturbance relations in very specific systems. The idea that delta x delta p >= hbar/2 for the standard deviations of position and momentum calculated for a single quantum state, i.e. what we now usually call the Heisenberg uncertainty relation, was actually first presented by Bohr. He proved it for a single nonrelativistic particle using Fourier analysis in his Lake Como lectures, so it should probably be called the Bohr uncertainty relation, although the extent to which it arose out of discussions with Heisenberg is unclear. The Robertson uncertainty relation, i.e. the most commonly used general uncertainty relation, is a generalization of Bohr’s uncertainty relation rather than of Heisenberg’s original work.
    I find it quite odd that the uncertainty principle is regarded by many people as the fundamental principle of quantum theory. You cannot derive the structure of quantum theory from it, at least not from the Robertson type of relation. Instead, it is a heuristic rule that follows from the formalism, which is useful for estimating what will happen in a given physical scenario, but this usually needs to be backed up with a more rigorous derivation. Regarding the uncertainty principle as fundamental is a bit like thinking that the Stefan-Boltzmann law is the fundamental principle behind black body physics. We should probably stop harping on about it so much in undergraduate courses and popular articles.

    1. I think you’re right that the confusion can be caused by the many-to-one mapping of concepts to names here. And it’s also true that for many physics applications one doesn’t need to distinguish between the different preimages, which tends to reinforce the murkiness.
      Regarding treating the HUP as a fundamental principle rather than a consequence, I’m not so sure… I regard the uncertainty principle as a fundamental fact about symmetry, really. E.g., translation invariance -> Fourier theory -> x and p -> HUP. Every step in that chain is very very natural. I’m not sure that “natural” and “fundamental” are precisely the same thing, but I guess what I mean is, I’m more sympathetic than you are to this point. 🙂
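      To make the Fourier step concrete (a standard textbook calculation, not something from the experiment or the thread): a Gaussian wavepacket saturates the bound, since its Fourier transform is again a Gaussian whose width is inversely proportional to the original one,
      $$\psi(x) \propto e^{-x^2/4\sigma^2} \;\longleftrightarrow\; \tilde\psi(p) \propto e^{-\sigma^2 p^2/\hbar^2}, \qquad \Delta x\,\Delta p = \sigma\cdot\frac{\hbar}{2\sigma} = \frac{\hbar}{2}.$$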

      1. From a symplectic point of view, the uncertainty principle is simply the peculiar way that physicists have of expressing the geometric theorem “the scalar curvature of the unit sphere is unity.”

    1. I am opposed to the use of weak values as an obfuscation device to make quantum theory seem weirder than it really is. However, I am coming round to the idea that the existence of measurement processes in which the information gain is first order in the interaction strength whereas the change in the wavefunction is second order may be an important property of quantum theory. Whether such a process would always give rise to the usual definition of weak values is an open question. I’d like to see more of this work done in terms of POVMs, and we need to categorize the full set of measurement procedures that have this property. Also, more work needs to be done to understand the kinds of classical toy models that can reproduce weak values, since a lot of bold claims are being made and I am not sure if the evidence supports them yet.
