The physics Nobel Prize has been announced. This year's winners are John Hall (University of Colorado), Theodor Hänsch (Max Planck Institute in Garching), and Roy Glauber (Harvard): the first two for experimental work in high-precision laser spectroscopy, and the latter for theoretical work in quantum optics. Sweet! I note with some delight that Ted Hänsch's research group currently oversees work which is strongly motivated by the quest to build quantum information processing devices. Which reminds me of something I like to say to my experimental friends in quantum information science: "First one to a quantum computer gets a Nobel Prize!"
Marilyn >> Newton!
Hawking interview in the Guardian. Funny:
He is asked, "If you could go back in time, who would you rather meet, Marilyn Monroe or Isaac Newton?" and after ten minutes he replies, in that voice that makes the blandest statement sound profound: "Marilyn. Newton seems to have been an unpleasant character."
The interview reminds me of attending a physics talk by Hawking at Caltech. What was awesome was that after the talk, there was a question from the audience. And then, because it takes Hawking a long time to compose his answer, the bigwig professors at Caltech began to debate what exactly Hawking's answer should be. Wow, very fun stuff to witness! Then, after about fifteen minutes, came Hawking's answer. We all waited in anticipation for that famous digitized voice. His answer to the question: "No."
Dirac >> Feynman?
Information Processing points to a review by Freeman Dyson in the New York Review of Books of Perfectly Reasonable Deviations From The Beaten Track: The Letters Of Richard P. Feynman.
What I find interesting in the article is Dyson's claim that Paul Dirac was a greater genius than Richard Feynman. Of course, judging "greater genius" seems about as silly as worrying about whether it is a "double backward doggy dare" or a "triple backwards over-the-top doggy dare." With this caveat, however, I just have to say "huh?"
In my mind Dirac has three claims to genius: his derivation of the Dirac equation, his work on magnetic monopoles (showing that the existence of a single such monopole could explain why electric charge comes in discrete units), and his work unifying the early, differing approaches to quantum theory. Feynman, in my mind, has four or more claims to genius: his derivation of the path integral formulation of quantum theory, his space-time approach to solving the problems of quantum electrodynamics, his work on the theory of superfluidity (showing the importance of quantum theory at a "macroscopic" level), and his model of weak decay (work with Gell-Mann which was also done independently by George Sudarshan and Robert Marshak). So in my mind I put Feynman just above Dirac (what, you mean you don't have your own personal ordering of geniuses?).
And, after thinking about it for a while (too much time, perhaps!), I think I can guess why Dyson puts Dirac above Feynman (oh, to be a physicist known by your last name alone!). I believe the reason is that Dyson was originally a mathematician. Feynman's work is filled with the sort of raw physical insight that physicists love and admire. Sure, making the path integral rigorous is a pain in the rear, but it works! In Dirac's work, on the other hand, we find a clear mathematical beauty: the Dirac equation and the magnetic monopole are motivated more by arguments of symmetry than by any appeal to a physicist's "calculate and run" methodology (indeed, the latter is not even known to correspond to experimental reality!).
So who is the greater genius? Well, I "double dog dare you" to come up with reasons that Dirac is a greater genius than Feynman.
Update: See the comments for some fun back and forth. OK, in my head I really put Dirac and Feynman at the same level. What I find interesting is how one's background influences this (silly) debate. If you are a particle theorist, I bet you have Dirac > Feynman. If you went to Caltech as an undergrad, I bet you have Feynman > Dirac. Ah, the ways theorists waste away their days.
Brain Eater Syndrome
A very amusing observation about crackpots. The last line is a classic. I wonder which occupation (besides physicist!) produces the highest number of physics crackpots?
Mach-Zehnder Gone Bad
Posted, without comment, in order to protect the identity of certain “experimentalists” aiding a “theorist” to take pictures of a theorist’s conception of a Mach-Zehnder interferometer:
Gamma Watch
Just what every physicist has dreamt of: a watch with a Geiger counter in it! Of course, you can also tell it is for physicists by the lack of style!
Quantum Soccer
Wee Kang Chua here in Singapore has pointed me to Quantum Soccer. If you want to decrease your productivity by a good ten or twenty percent, please click on the link.
Life Around Black Holes
I just started reading A Fire Upon the Deep by Vernor Vinge. Interestingly, in Vinge's universe the laws of physics are stratified. In particular, in the galaxy proper the speed of light is finite, but as one gets farther away from the galaxy this changes: the further one gets from the galaxy, the more amazing the technology one can build and operate.
Which got me thinking about Scott Aaronson's paper NP-complete Problems and Physical Reality. One issue Scott discusses in this paper is something you will hear over many coffee breaks at quantum computing conferences: can one use relativity to create exponential speedups in computational power? One way you might think of doing this involves time dilation. Set your computer up to calculate some hard problem. Then board a spaceship and head off for a leisurely trip around the galaxy at speeds nearing the speed of light. When you return to your computer, via the twin paradox the computer will be much older than you, and will, hopefully, have solved the problem. Roughly, if your speed is [tex]$\beta=v/c$[/tex], then you get a speedup for your computation which is proportional to [tex]$(1-\beta^2)^{-\frac{1}{2}}$[/tex]. The problem with this scheme, it appears, is that for it to work you need to get your [tex]$\beta$[/tex] exponentially close to one, and this requires an exponential amount of energy. So it doesn't seem that such a scheme would work. Another proposal, along similar lines, is to set up your computer and then travel very close to a black hole (not recommended; only trained professionals should attempt this). Then, due to gravitational time dilation, you can mimic the twin experiment and return to a (hopefully) solved problem. However, again, it seems that getting yourself out of the gravitational well requires an amount of energy which destroys the effect.
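To make the energy argument concrete, here is a minimal back-of-the-envelope sketch in Python (the ship mass is made up for illustration). The speedup factor is just the Lorentz factor [tex]$\gamma=(1-\beta^2)^{-\frac{1}{2}}$[/tex], and the ship's kinetic energy is [tex]$(\gamma-1)mc^2$[/tex], so a speedup exponential in the problem size costs an energy exponential in the problem size:

```python
C = 2.998e8    # speed of light, m/s
M_SHIP = 1e3   # hypothetical ship mass, kg (made up for illustration)

for n in [10, 20, 40, 80]:
    # Lorentz factor gamma needed to compress 2^n computer-frame steps
    # into roughly polynomial ship time
    speedup = 2.0 ** n
    # Leading-order gap below the speed of light, from beta = sqrt(1 - 1/gamma^2)
    one_minus_beta = 1.0 / (2.0 * speedup ** 2)
    # Ship kinetic energy (gamma - 1) m c^2, in joules
    energy = (speedup - 1.0) * M_SHIP * C ** 2
    print(f"n={n:3d}: 1 - beta ~ {one_minus_beta:.1e}, energy ~ {energy:.1e} J")
```

Already at n=40 the fuel bill is around 10^32 joules, which is a few days of the sun's entire output devoted to accelerating a one-ton ship.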
But what Vinge's novel got me thinking about was the following: there appears to be a computational advantage to being away from masses. Assume that there is some form of life surrounding a black hole (OK, big assumption, but hey!). Then it seems to me that this computational advantage might contribute to a competitive advantage, in the evolutionary sense, for an intelligent species. Thus we might expect that a civilization for which gravitational time dilation is a real effect will live in a world, much like Vinge's, where the less intelligent animals live close to the mass and the more intelligent, more magic-wielding creatures live farther away ("Any sufficiently advanced technology is indistinguishable from magic." -Arthur C. Clarke). Following Vinge's novel, one might even speculate about the computational advantage of being outside the galaxy. The time dilation effect there is about one part in one million (as opposed to one part in one billion for the effect of being at the surface of the earth versus "at infinity"). Unfortunately, this seems like too small a number to justify any such effect.
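Those two numbers are easy to reproduce in the weak-field approximation, where the fractional clock slowdown is roughly [tex]$GM/rc^2$[/tex]. Here is a quick sketch; the galactic figure uses the crude assumption that the depth of the galactic potential well is of order the circular rotation speed squared:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def clock_slowdown(mass_kg, radius_m):
    """Fractional slowdown GM/(r c^2) of a clock at radius r from mass M,
    relative to a clock at infinity (weak-field approximation)."""
    return G * mass_kg / (radius_m * C ** 2)

# A clock on the Earth's surface versus one "at infinity"
print(f"Earth's surface: {clock_slowdown(5.97e24, 6.37e6):.1e}")  # ~7e-10

# Crude galactic estimate: potential depth ~ v_circ^2 for a flat rotation curve
v_circ = 2.2e5   # m/s, roughly the Sun's galactic orbital speed
print(f"Galactic well:   {(v_circ / C) ** 2:.1e}")                # ~5e-7
```

So being outside the galaxy buys you roughly a part in a million, nearly a thousand times more than leaving the Earth, but still nothing an evolutionary arms race is likely to notice.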
OK, enough wild speculation for a Tuesday.
Best Title Ever Submission: Pancakes
Steve submits the following for best paper title ever, astro-ph/0509039: “Local Pancake Defeats Axis of Evil” by Chris Vale. I wonder if Chris has informed North Korea, Iraq, and Iran that they have been defeated by a local pancake (is it better to be defeated by a nonlocal pancake?)
Actually this paper is about a very interesting subject. One can learn all sorts of interesting things about the cosmological history of our universe from the angular spectrum of the cosmic background radiation. I remember as a graduate student, when I thought I might go into astrophysics, taking a class in which we calculated this spectrum for all sorts of different cosmological models. The bumps in the spectrum across different spherical harmonics were distinctly different for many different models. Well now, thanks to experiments like WMAP, we have exceedingly good information about this spectrum (the music of the universe, in poetic language). This allows us to very nicely rule out all sorts of models about the early history of our universe.

Interestingly, however, there is a possible unexplained feature in the spectrum: the l=2 and l=3 components appear to be aligned with each other. One explanation of this effect is simply that it is a statistical fluke; the alignment itself has come to be known as the "axis of evil"! The "local pancake" in the title of the paper refers to the author's theory about this l=2, l=3 anomaly: he postulates that it is the effect of gravitational lensing due to the structure of mass in our local neighborhood (bet you didn't know we lived in a pancake, did you?). This lensing, the author claims, has a consequence for the l=1 (dipole) term. But why does this change the l=2, l=3 components? Because the l=1 dipole term is usually subtracted out from the data, since it has large components due to our proper motion with respect to the cosmic background radiation; the author claims that the lensing effect causes this dipole to be subtracted incorrectly, leaving an aligned residue at l=2 and l=3. Any good astrophysicists out there slumming on a quantum blog care to comment?
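To see how a mishandled dipole can feed power into higher multipoles at all, here is a toy illustration (mine, not the paper's, and reduced to an azimuthally symmetric sky so plain Legendre polynomials suffice). Multiplying a pure dipole by a dipole-like lensing modulation generates a quadrupole aligned with the original dipole axis; a higher-order term in the modulation would feed l=3 the same way:

```python
from scipy.integrate import quad
from scipy.special import eval_legendre

EPS = 0.05  # made-up lensing modulation strength

def sky(x):
    """Toy sky: a pure dipole cos(theta) times a dipole-like modulation,
    written in terms of x = cos(theta)."""
    return x * (1.0 + EPS * x)

# Legendre coefficients a_l = (2l+1)/2 * integral_{-1}^{1} f(x) P_l(x) dx
for l in range(4):
    integral, _ = quad(lambda x, l=l: sky(x) * eval_legendre(l, x), -1.0, 1.0)
    print(f"l={l}: a_l = {(2 * l + 1) / 2.0 * integral:+.4f}")
```

The output shows a_1 = 1 (the dipole you fit and remove) plus an a_2 of order EPS that survives the subtraction, sitting right on the dipole axis.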
Quantum Dot Qubits
I had seen these results in a previous talk, but now the paper is out in Science. Charles Marcus's group at Harvard has been working on building a qubit from a semiconductor quantum dot. One of the difficulties for this type of qubit is that it couples via a hyperfine interaction to the surrounding nuclei. If you don't do anything about this hyperfine interaction, a typical Rabi flopping experiment exhibits decoherence times of around 10 nanoseconds. Way too short to make a useful quantum computer! But what is nice about the hyperfine decoherence mechanism is that it is an effectively constant coupling, which can be overcome by doing a spin echo experiment. In the above paper, Marcus and colleagues show that with appropriate spin echo techniques they get lifetimes of about 1 microsecond. Nothing like a couple of orders of magnitude improvement, no?
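To illustrate why a constant coupling is so forgiving, here is a toy numerical sketch (not the actual experiment; the detuning scale is made up to give a T2* in the ten-nanosecond ballpark). Each run of the experiment sees a random but static hyperfine detuning, so averaging over runs kills the free-induction signal quickly, while a π pulse at the halfway point refocuses each run's phase exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each run sees a random but *static* detuning (rad/ns),
# standing in for the slowly varying nuclear Overhauser field.
detunings = rng.normal(0.0, 0.1, 10_000)   # sigma chosen so T2* ~ 14 ns

# Free induction decay: phases delta*t accumulate unchecked, so the
# run-averaged signal decays like exp(-(sigma*t)^2 / 2).
t = 30.0  # ns
fid = np.cos(detunings * t).mean()

# Hahn echo: a pi pulse at t/2 flips the sign of the phase accumulated
# so far, so a static detuning cancels exactly: delta*t/2 - delta*t/2 = 0.
echo = np.cos(detunings * (t / 2) - detunings * (t / 2)).mean()

print(f"FID signal at t={t:.0f} ns:  {fid:.3f}")   # ~0.01, essentially gone
print(f"Echo signal at t={t:.0f} ns: {echo:.3f}")  # 1.000, perfectly refocused
```

In the real device the nuclear field is only quasi-static, of course, and its slow wander is what eventually limits the echo to the microsecond scale.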