The Nobel Prize in Physics for 2009 has been announced and goes to Charles K. Kao “for groundbreaking achievements concerning the transmission of light in fibers for optical communication” and to Willard S. Boyle and George E. Smith “for the invention of an imaging semiconductor circuit – the CCD sensor.”
I’m crazy busy so don’t have time to comment on the physics of these awards at the moment, but the thing that struck me about this selection will probably strike a few others and can be summarized in two words: Bell Labs. Boyle and Smith are retired from Bell Labs, which is also where they invented the CCD. And today… well, today Bell Labs does not do any basic physics research. Instead its current owner, Alcatel-Lucent, has Bell Labs focused on “more immediately marketable areas such as networking, high-speed electronics, wireless, nanotechnology and software.” In other words, you can pretty much bet that when you plot Bell Labs Nobel prizes versus time you will see an amazing bubble, leading to a complete collapse.
Oh, and by my count that makes two McGill grads with Nobel prizes this year so far (Boyle in physics, Szostak in medicine).
Another lesson of these Nobel prizes is that *quantum limits can be approached*.
Both CCDs and optical fibers approach these limits quite closely: optical fibers in terms of transparency (limited by atomic-scale fluctuations in SiO2 density), and CCDs in terms of quantum efficiency (limited by the no-cloning theorem … you can’t detect the same photon twice!).
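A back-of-envelope sketch of what that transparency limit buys (the ~0.2 dB/km figure is the textbook loss of modern silica fiber near 1550 nm; it is background knowledge, not a number from this post):

```python
# How transparent is near-quantum-limited silica fiber?
# Modern telecom fiber attenuates roughly 0.2 dB per kilometer at 1550 nm.
loss_db_per_km = 0.2

for km in [1, 50, 100]:
    surviving = 10 ** (-loss_db_per_km * km / 10)
    print(f"{km:3d} km: {surviving:.1%} of the light survives")
```

One percent of the photons surviving a 100 km span is why a sparse chain of amplifiers suffices to cross an ocean.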
In fact, the no-cloning theorem also impacts fiber optics, in the sense that no (phase-independent) power amplifier can exhibit a noise figure less than 10 log_10(2) ≈ 3 dB. This fundamental quantum limit (which commercial fiber-optic amplifiers *already* approach to within 0.1 dB) is due to the ubiquitous Carlton Caves, and furthermore it is interesting (as reviewed in Section 3.2.8 of our Practical Recipes article in NJP) that Caves’ limit is equivalent to the many other standard quantum limits on sensing and/or amplification and/or feedback control.
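For the record, the arithmetic really is this short (nothing assumed here beyond the formula above):

```python
import math

# Caves' limit: a high-gain phase-independent linear amplifier must at
# least double the input noise power, so its noise figure is bounded by:
caves_limit_db = 10 * math.log10(2)
print(f"Caves limit = {caves_limit_db:.4f} dB")                    # 3.0103 dB

# A commercial fiber amplifier quoted as within 0.1 dB of the limit:
print(f"near-ideal amplifier NF = {caves_limit_db + 0.1:.2f} dB")  # 3.11 dB
```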
Among all the standard quantum limits, Caves’ is IMHO uniquely distinguished by being nondimensional *and* by being the easiest to remember with the factors of two correct.
The question naturally arises, what *other* technologies are pressing against fundamental quantum limits (and informatic limits, these being fundamentally the same limits) … to which the answer (or question) obviously is … uuuhhhhhh … *ALL* of them?
And so the *real* question this Nobel award suggests is: “Which technologies will be the *next* to approach the quantum limits?”
Because we can prudently bet that vigorous industries will emerge, based upon them.
It’s interesting that the corporate R&D facilities that used to be at the cutting edge and have since devolved into facilities designed to improve the short-term bottom line, are all older facilities. The ground-breaking corporate R&D stuff these days seems to be coming from companies founded in the past forty years (IBM being a notable exception).
I agree 100% with Dave *and* with Ian … “It was the best of times, it was the worst of times.”
The challenge for young folks, perhaps, is to foresee which modern enterprises are entering the former times rather than the latter times.
Obviously this is mighty tough to foresee … 🙂
Pieter Kok: John, I sincerely hope that quantum limits (at least the standard ones) can be beaten! 😉
Not to disappoint you, Pieter, but IMHO the quantum limits *cannot* be beaten (unless QM is physically wrong, which I very much doubt) … any more than the laws of thermodynamics can be beaten.
On the other hand, the scientific literature is often imprecise as to what “the quantum limits” actually are; the result is that it is deplorably common for articles to adopt a lax or imprecise definition … and then shoot it down.
Hence my appreciation for Caves’ 3 dB quantum limit, which is precisely defined and rigorously derived from quantum principles that are exceedingly general.
Do you want to break Caves’ limit? It’s easy. Just relax the constraint “phase-independent” … hey, you’re done … submit the article to PRL! 🙂
But this approach to beating quantum limits, while mathematically and physically valid, is not very useful in engineering practice, for two reasons: (1) as the noise figure of one amplification quadrature is adjusted to beat Caves’ limit, the noise figure of the other quadrature necessarily degrades; (2) in observational applications one generally *doesn’t* know the phase of the signal, and so we are just as likely to degrade the SNR as to improve it.
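Here is a stylized numerical illustration of both points. This is an assumption-laden sketch, *not* Caves’ actual derivation: I simply pin the product of the two quadrature noise figures at 4, so that their geometric mean sits at the phase-insensitive value of 2 (3 dB); the names `squeeze`, `F1`, and `F2` are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized constraint: the two quadrature noise figures trade off
# against each other so that F1 * F2 = 4 always holds.
for squeeze in [1.0, 2.0, 4.0]:            # squeeze = 1 is the symmetric case
    F1, F2 = 2.0 / squeeze, 2.0 * squeeze  # pushing F1 below 2 forces F2 above 2
    # The signal arrives with an unknown, uniformly random phase, so the
    # effective noise figure is a random mixture of the two quadratures.
    theta = rng.uniform(0.0, 2.0 * np.pi, 100_000)
    F_eff = F1 * np.cos(theta) ** 2 + F2 * np.sin(theta) ** 2
    print(f"squeeze = {squeeze:3.1f}: best quadrature "
          f"{10 * np.log10(F1):+5.2f} dB, phase-averaged "
          f"{10 * np.log10(F_eff.mean()):+5.2f} dB")
```

The favored quadrature can indeed be pushed below 3 dB (even below 0 dB), but by the AM-GM inequality the phase-averaged figure (F1 + F2)/2 >= sqrt(F1 F2) = 2 never drops below 3 dB … and the harder you squeeze, the worse the average gets.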
These considerations go far in helping us understand why enterprises like gravity wave detection have tiptoed up to Caves-type standard quantum limits, but have not exceeded them.
To put a crisp point on it, how long will it be before advances in QIT lead to phase-independent optical power amplifiers having a noise figure better than 3 dB?
The answer (IMHO) is simply: Never.
Alternatively: When exact cloning of quantum states becomes possible too.
Alternatively: When the second law of thermodynamics is proven to be wrong too.
Alternatively: When the *first* law of thermodynamics is proven to be wrong too.
The bottom line (IMHO) is simply this: articles on the theme that “the standard quantum limit has been beaten” need to be read in the same spirit of careful scientific inquiry as articles on the theme “the second law of thermodynamics has been beaten.”
The authors may well have accomplished something very clever, but nonetheless such claims deserve *very* careful scrutiny as to *precisely* what has been demonstrated.
As with claims that “the second law of thermodynamics has been beaten”, investigation will generally discover (in Thoreau’s memorable phrase) that there is a “trout in the milk.” 🙂
This curmudgeonly attitude does *not* mean that I am any kind of pessimist regarding quantum mechanical math, science, or engineering. To the contrary, IMHO most practical technologies (both hardware *and* simulation) are presently so far from approaching the fundamental limits that are imposed by QM/QIT, that opportunities for creative research are effectively unbounded.
A w00t for our illustrious alums. McGill’s press release says that Szostak started his undergrad degree in cell biology at the age of fifteen. Apparently prodigies aren’t limited to mathematics, physics and music after all.
We aren’t really disagreeing, Pieter … because as the noise figure for the measured phase improves, doesn’t the noise figure for the measured ⟨N⟩ degrade? So that Caves’ caveat applies?
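(The heuristic behind that rhetorical question is the number-phase uncertainty relation \Delta N \, \Delta\phi \gtrsim 1/2: squeeze the phase noise down and the conjugate photon-number noise necessarily grows, so the Caves-style tradeoff reappears in number-phase form. The \gtrsim is deliberate; making this relation precise is notoriously subtle, since no self-adjoint phase operator exists.)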
It’s really more a question of culture IMHO.
Engineers want quantum limits that can’t be broken … unless the physicists are wrong about the fundamental laws of physics. And such limits undoubtedly exist (Caves’ amplifier noise limits are among them).
Physicists want quantum limits that *can* be broken … provided that physicists devise clever schemes that evade conventional engineering wisdom (the phase measurements that you describe belong to this class).
The result is that both sides define “quantum limits” so as to obtain exactly their desired result … and yet (upon careful analysis) there is *no* mathematical or physical incompatibility between these two styles of research.
Plenty of surprises can result, however. An instructive example (from gravity wave detection) is to calculate a “standard quantum noise limit” for free test masses, and then show that this limit can be broken by interferometric methods for phase-independent detection.
Hokey schmokes!!! Apparently Caves’ limit has been broken!
But we can calm ourselves by looking for the “trout in the milk”. And that trout is not easy to spot on paper … it is that the light beam itself (from the interferometer) serves as a spring … so that the test mass is *not* free, but rather is spring-suspended.
This “trout” is not an academic quibble — the light beam itself in advLIGO will be more rigid than an equivalent (two-kilometer) diamond bar.
The bottom line is that in practical devices like LIGO, Caves-type limits *cannot* be broken, no matter what exotic quantum light states are injected into the device.
It is perfectly feasible, however, to inject squeezed states that provide stiffer optical springs for lower optical powers, which (potentially, it’s not easy to generate these states) have beneficial effects on both the signal-to-noise ratio *and* the infamous Sidles-Sigg parametric instabilities.
For me, the best way to derive rigorous Caves-type bounds for linear measurement schemes is to write down the most general path integral (arXiv:quant-ph/0211108v3) and deduce limits that follow solely from the fact that QM *has* a path-integral representation … it was this path-integral work that led to the discovery of the Sidles-Sigg instabilities (although the published analysis of that instability follows a shorter, more classical route).
Our recent NJP Practical Recipes article is a nonlinear generalization of this early path-integral exercise. Still, it was this path-integral exercise that warmed my appreciation of the strength of Caves-style quantum limits.
The preceding has been (obviously) an ultra-orthodox and ultra-conservative view of how to formulate non-“Cavesian” quantum limits that *can* be broken (essentially by engineering cleverness) … as contrasted with “Cavesian” quantum limits (that are breakable only if quantum physics is broken too).
At the end of the day, IMHO everyone is right. It is “merely” Cavesian versus non-Cavesian expectations and narratives that differ among disciplines, not the fundamental math and physics.
Pieter, these considerations are at the forefront of my thinking because (as Dave knows) our UW QSE Group is writing up a successor to that NJP article, which advances the quasi-heretical notion that simulating quantum spin systems belongs to the same complexity class as simulating classical spin systems.
For this to be true, there have to be some “trouts in the milk”.
One “trout” is that the spins must be in (possibly weak) thermal contact with a reservoir … another “trout” is that no ancilla bits can be introduced in the course of the simulation.
The consequence (of course) is that Shor’s algorithm *cannot* be simulated with classical resources. On the other hand, there are a tremendous number of real-world spin systems that *can* be efficiently simulated.
For all you Seattle folks, we’ve decided to give a series of five seminars on this “concentration-and-pullback” symplectic framework, titled “Concentration conjectures and pullback frameworks for classical and quantum spin simulations.”
The series starts this coming Friday, at 2:00 pm, in Room K450 of the Health Sciences complex. This is the same room as, and one hour before, the Baker Group’s synthetic biology seminar … which *also* focuses on large-scale concentration-and-pullback symplectic simulations.
And yes, ME students can audit these lectures for one hour of credit … at present there is a glorious total of *one* student signed up! 🙂
That’s a lot of humble opinions! 😉
The standard quantum limit is often expressed as a Cramér-Rao bound \delta\phi \geq 1/\sqrt{N}, where N is the number of independent trials. There is a huge experimental effort underway to actually beat this limit.
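To make the 1/\sqrt{N} scaling concrete, here is a toy Monte Carlo (a sketch of a generic interferometric click model, not any particular experiment; `phi_true` and the click probability are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
phi_true = 0.4      # the unknown phase we are trying to estimate
trials = 20_000     # independent repetitions of the whole experiment

for N in [100, 400, 1600, 6400]:
    # Each of the N trials yields a detector click with probability
    # (1 + cos(phi)) / 2, as in a simple two-port interferometer.
    clicks = rng.random((trials, N)) < (1 + np.cos(phi_true)) / 2
    p_hat = clicks.mean(axis=1)
    phi_hat = np.arccos(np.clip(2 * p_hat - 1, -1.0, 1.0))
    print(f"N = {N:5d}: std(phi_hat) = {phi_hat.std():.4f}   "
          f"1/sqrt(N) = {1 / np.sqrt(N):.4f}")
```

For this particular click model the bound is saturated exactly: std(p_hat) = sin(phi)/(2\sqrt{N}), the arccos Jacobian contributes a factor 2/sin(phi), and the two factors of sin(phi) cancel to leave std(phi_hat) = 1/\sqrt{N}.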
Even if you cannot improve the precision across the full domain of the phase, very often you are only interested in small phases anyway, in which case you want your highest sensitivity around \phi = 0. And strictly theoretically, NOON states (|N,0> + |0,N>) retain their phase sensitivity over the entire phase domain (although any noise will screw all this up).
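(For readers who haven’t met them: a NOON state acquires an N-fold phase, (|N,0> + |0,N>)/\sqrt{2} \to (|N,0> + e^{iN\phi}|0,N>)/\sqrt{2}, so its interference fringes oscillate as cos(N\phi) and the phase uncertainty reaches Heisenberg scaling, \delta\phi = 1/N … a \sqrt{N} improvement over the shot-noise bound above.)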
So beating the SQL is (IMHO) not at all in the same class as perpetua mobilia of the second kind.
Glad we agree, John. I’ll check out your NJP article; it sounds interesting.