"Drawing Theories Apart : The Dispersion of Feynman Diagrams in Postwar Physics" by David Kaiser

Some of you have accused me of Feynman hero worship. To which I plead guilty, but with exceptions! I certainly admire the guy for certain qualities, but like everyone, he was human and so comes along with all the lovely faults and trivialities that make up our entertaining species. But on to the subject at hand: I just finished reading Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics by David Kaiser.
This book was once a thesis. Parts of it read like it was once a thesis. On the other hand, I really like reading theses. But still, there are times when I wish a little more editing had been done to produce a more narrative tone.
That being said, I mostly recommend this book to the hard-core aficionado of the early history of quantum field theory. But if you are such a rare beast (I suspect most physicists are!), this book is very entertaining. The most interesting component of the first half of this book involves the “split” between Feynman and Dyson in their take on the diagrams (interestingly, early on, the diagrams were often referred to as Feynman-Dyson diagrams) and how this difference could be traced through the postdocs and graduate students who learned the techniques from either Feynman or Dyson. It is interesting how the rigor of Dyson and the physical intuition of Feynman could be explicitly seen in how they drew the diagrams. Dyson always drew the diagrams with right angles, clearly indicating that they were simply a tool for bookkeeping the perturbation theory. Feynman’s diagrams, on the other hand, had tilted lines, much more suggestive of the path integral formulation of quantum theory which Feynman had in mind when coming up with the rules for the diagrams.
The second half of the book is dedicated to a study of Geoffrey Chew and his idea of nuclear democracy. I certainly wish that this part of the book had more details, as this story is fascinating, but on the whole the book gives a nice introduction to the S-matrix dispersion tools and the basic ideas of the bootstrap, and looks at how diagrammatic methods played a role in this work (no longer really Feynman diagrams). Interestingly, I learned that Chew was probably the first professor to resign in protest over the University of California’s requirement of an anti-communist oath. Good for Chew.

Letters in the Sky with Dialogue

Steve Hsu is at it again with an interesting paper, this time with Anthony Zee (UCSB). And this one has to be read to be believed: physics/0510102:

Message in the Sky

Authors: S. Hsu, A. Zee
Comments: 3 pages, revtex
Subj-class: Popular Physics
We argue that the cosmic microwave background (CMB) provides a stupendous opportunity for the Creator of our universe (assuming one exists) to have sent a message to its occupants, using known physics. The medium for the message is unique. We elaborate on this observation, noting that it requires only careful adjustment of the fundamental Lagrangian, but no direct intervention in the subsequent evolution of the universe.

I especially like the last paragraph:

In conclusion, we believe that we have raised an intriguing possibility: a universal message might be encoded in the cosmic background. When more accurate CMB data becomes available, we urge that it be analyzed carefully for possible patterns. This may be even more fun than SETI.

The Power of Really Really Big Computers

An interesting question at the talk Sunday was: suppose you build a quantum computer; how do you really know that these crazy quantum laws are governing the computer? What I love about this question is that you can turn it around and ask the same question about classical computers! At heart we believe that if we put together a bunch of transistors to form a large computation, each of these components is behaving itself and there is nothing new going on as the computer gets larger and larger. But for quantum computers there is a form of skepticism which says that once your computer is large enough, the description given by quantum theory will fail in some way (see, for example, Scott Aaronson’s “Are Quantum States Exponentially Long Vectors?”, quant-ph/0507242, for a nice discussion along these lines). But wait: why shouldn’t we ask whether classical evolution continues to hold for a computer as it gets bigger and bigger? Of course, we have never seen such a phenomenon, as far as I know. But if I were really crazy, I would claim that the computer just isn’t big enough yet. And if I were even crazier I would suggest that once a classical computer gets to a certain size its method of computation changes drastically and allows for a totally different notion of computational complexity. And if I were flipping mad, I would suggest that we do have an example of such a system, and this system is our brain. Luckily I’m only crazy, not flipping mad, but still this line of reasoning is fun to pursue.
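To get a feel for why the skeptics fixate on exponentially long state vectors, here is a back-of-the-envelope sketch (the function name is mine, not from any paper): storing the full amplitude vector of an n-qubit state classically takes 2^n complex numbers, which blows past any conceivable memory long before n gets interesting.

```python
# A full n-qubit state vector has 2**n complex amplitudes.
# At 16 bytes per amplitude (two 64-bit floats), the memory
# cost doubles with every added qubit.

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required to hold all 2**n complex amplitudes."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
```

Fifty qubits already demands tens of petabytes, which is part of why "maybe quantum theory fails at scale" is a falsifiable kind of skepticism: build the device and see.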

Talking in L.A. Talking in L.A. Nobody Talks in L.A.

Yesterday I gave a talk on quantum computing in Los Angeles at math-club. What is math-club? From their website (“be there or be square”):

People who like books and stories have movies, libraries, and even book clubs to amuse them. Those interested in math and logic have amazon.com and the occasional brain teaser at the back of in-flight magazines.
MATH-CLUB was formed to engage a group of LA’s math-inclined people in analytical discussions under an easy going atmosphere.
The club got rolling under suitably informal circumstances — its founder, Roni Brunn, brought up the idea for a math club at a bar and was lucky enough to do so within ear shot of a friend, Matt Warburton, who liked the concept and thought of others who would as well. Soon after that, we had our first math club meeting.
Stewart Burns lectured on a Sunday afternoon in September, 2002, to a crowd that included David X. Cohen, Kirsten Roeters, April Pesa, Ken Keeler, Daisy Gardner, Sean Gunn, James Eagan, and, of course, Matt, using most horizontal surfaces at Roni’s apartment as seating.
Because many of those most actively involved with MATH-CLUB do have writing credits at shows like “The Simpsons” and “Futurama,” some assume the club is only the province of professional Hollywood writers. In fact, the club as a whole is a diverse group of people who punch the clock as professors, computer programmers, musicians, actors, designers, journalists, and documentarians.
Similarly, people come to meetings with a wide range of math backgrounds. Some of the members have advanced degrees in math and have been published; some are recreational users of math. People do ask questions and explore the topic as a group. We kick off meetings with a cocktail party.

Who could resist talking to such a club? Certainly not me. Especially when I received an email from the organizer, Roni Brunn, which had as its subject “math and simpsons.” Such subject lines stimulate approximately 90 percent of my brain. If you include the word “ski” as well, then the remaining 10 percent is activated and I go into epileptic seizures. Not a pretty sight.
I am especially fond of the math club analogy with book clubs (which were described to me by one of the people present last night as “mom’s drinking night.” Crap, now I’m going to get an email from my mother.) Why aren’t there more math clubs in the mold of book clubs, where small groups get together to hear about subjects which stretch their critical brains? I certainly think it’s a grand idea and am tempted to start a math club myself (but in Seattle would we replace writers with programmers?)
When I was at Caltech I would often attend the talks in fields outside of physics/computer science. I called it “my television.” Certainly hearing a biology or geology talk and trying to discern what the heck they were talking about made me feel like a total moron. But whenever you hear a good talk outside of your field, you get a little bit of a feeling for what is going on, and this feels great. I personally wish that more graduate students would do this in order to help combat the graduate student fatigue which comes from being far too narrowly focused and not remembering why it is that all of science is interesting.
Anyway, it was a fun talk, and hopefully I didn’t punish people too much. When I was in Italy recently, I noticed that when the students were not understanding the English I was using, I would try to fix this by reverting to idioms, which are simpler, of course, but totally incomprehensible to the Italian students. I noticed last night that I sometimes do a similar thing when I give a talk: when I’m talking about something and I think people are lost, I oscillate back to a language which is far simpler but whose content doesn’t really help convey my original meaning. What I need to learn to do is to ramp down to a medium level while keeping the content. Ah well, much to learn about this “giving talks” thing.

Ig Nobel Prizes 2005

The 2005 Ig Nobel Prizes are out! The physics prize went to the University of Queensland:

PHYSICS: John Mainstone and the late Thomas Parnell of the University of Queensland, Australia, for patiently conducting an experiment that began in the year 1927 — in which a glob of congealed black tar has been slowly, slowly dripping through a funnel, at a rate of approximately one drop every nine years.

I’ve actually seen this glob of black tar. Little did I know I was looking at an experiment of Nobel proportions! Makes me wish I’d taken a picture of it.

Holography Oversold?

Warning: this post is about a subject I know a tiny, tiny bit about. I suspect I will have to update it once I get irate emails pointing out my horrible misunderstandings.
Roman Buniy and Stephen Hsu (both from the University of Oregon…quack, quack…the mascot of UofO is the Duck!) cross-listed an interesting paper to quant-ph today: hep-th/0510021, “Entanglement entropy, black holes and holography” (Steve posted about it on his blog). As many of you know, the idea of holography is that the number of degrees of freedom of a region of our universe scales in proportion to the surface area of the region. This strange conjecture is extremely interesting, and bizarre, because it raises all sorts of questions about how such theories work (I especially have problems thinking about locality in such theories, but hey, that’s just me). One line of evidence for the holographic principle comes from black hole physics. One can formulate a thermodynamics for black holes, and this thermodynamics gives an entropy for a black hole which is proportional to its area. Another interesting fact is the AdS/CFT correspondence, which shows an equivalence between a certain quantum gravity theory in an anti-de Sitter universe and a conformal field theory on the boundary of this space: i.e. quantum gravity in this space can be described by a theory on the surface of the space, a holographic theory, so to speak. Indeed, the fact that certain string theories have black holes which have a holographic number of degrees of freedom is taken as evidence that string theory might be consistent with our universe.
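For concreteness, the black-hole entropy mentioned above is the Bekenstein-Hawking formula, S = k_B c³ A / (4 G ħ), with A the horizon area. A quick numerical sketch (constants hardcoded to their SI values; the function name is mine):

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
hbar = 1.055e-34   # reduced Planck constant
M_sun = 1.989e30   # solar mass, kg

def bh_entropy_over_kB(mass_kg: float) -> float:
    """Bekenstein-Hawking entropy S/k_B = A c^3 / (4 G hbar)
    for a Schwarzschild black hole of the given mass."""
    r_s = 2 * G * mass_kg / c**2    # Schwarzschild radius
    area = 4 * math.pi * r_s**2     # horizon area
    return area * c**3 / (4 * G * hbar)

print(f"S/k_B for a solar-mass black hole: {bh_entropy_over_kB(M_sun):.2e}")
```

The answer comes out around 10^77, vastly more than the ~10^58 or so of ordinary thermodynamic entropy in the sun itself, and it scales with the area, not the volume, which is the whole puzzle.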
What Buniy and Hsu suggest in their paper is that the holographic bound is not a bound on the degrees of freedom of our theory of the universe, but that instead it should be thought of as a bound on the entropy of a region in the presence of gravity. They point out that if you take gravity away, then the number of degrees of freedom scales like the volume (although if you take the ground state of a local quantum field theory, then this particular state has an entropy which scales like the area: the states Buniy and Hsu consider are therefore necessarily not ground states of such theories. But this doesn’t mean that they don’t exist or that we can’t construct such states). They then argue that if, on the other hand, you want to avoid gravitational collapse, this requirement precludes such states, and indeed gives you states whose entropy scales like the area. What Buniy and Hsu seem to be arguing is that while one does obtain entropies which scale like the area using these arguments about black holes, this doesn’t imply that the degrees of freedom of the underlying theory must scale as the area.
One might wonder whether there is a difference between having an entropy scaling like the area and the degrees of freedom scaling like the area. Well certainly there would be for an underlying theory of quantum gravity: presumably different degrees of freedom can be accessed which give the same area scaling, but which represent fundamentally different physical settings. So, for example, I can access some of these degrees of freedom, and as long as I don’t create a black hole, these degrees will be as real for me as they can be. But if I try to access them in such a manner that I create a black hole, I will only see the effective degrees of freedom proportional to the area of the black hole.
Which is all very interesting. Just think, maybe one of the greatest achievements of string theory, deriving holographic bounds, actually ends up being a step in the wrong direction. And, no, I’m not wishing this fate upon string theory. I wish no fate upon any theories: I just want to understand what nature’s solution is.

Nature Physics

The first issue of Nature Physics is out (which is where the article by Brassard on information and quantum theory appeared). From the opening letter from the editor:

Authors may be pleased to know that manuscripts can be submitted to Nature Physics not only in Microsoft Word, but in LaTeX too.

Which caused me to almost fall out of my chair laughing. Welcome to the modern world, Nature publishing.

Super Solid Helium?

Yesterday I went to a condensed matter seminar on “super solid Helium” by Greg Dash. What, you ask, is super solid Helium? Well, certainly you may have heard of superfluids. When you take Helium-4 and cool it down, somewhere around 2 Kelvin the liquid He4 (assuming the pressure is not too high, so that it does not solidify) makes a transition to a new state of matter, the superfluid: a liquid with no viscosity. Well, actually, I think what happens is that you get a state of matter which has one part which is superfluid and another part which is normal. The superfluid part of the liquid, along with having no viscosity, also has infinite thermal conductivity, so it’s impossible to set up a temperature gradient in a superfluid. He3 also forms a superfluid at cold enough temperatures, but it does this at a much lower temperature, I believe around a few thousandths of a Kelvin. The mechanisms for superfluidity in these systems are different: in He4 it is Bose condensation of the He4 atoms themselves (which are bosons), while in He3 it is a Bose condensation of pairs of He3 atoms, which act as composite particles with Bose statistics (the mechanism is similar to the role Cooper pairs play in superconductivity).
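A rough way to see why the He4 transition lands near 2 Kelvin: plug liquid helium's density into the ideal-gas Bose-Einstein condensation temperature, T_c = (2πħ²/(m k_B)) · (n/ζ(3/2))^(2/3). This is only an estimate, of course; the ideal-gas formula ignores the interactions in the liquid, which is why it overshoots the measured lambda point of 2.17 K.

```python
import math

# Standard constants (SI) and the approximate density of liquid He-4
hbar = 1.055e-34    # J s
k_B = 1.381e-23     # J/K
m_He4 = 6.646e-27   # kg, mass of one He-4 atom
rho = 145.0         # kg/m^3, liquid He-4 density
zeta_3_2 = 2.612    # Riemann zeta(3/2)

n = rho / m_He4     # number density of atoms
T_c = (2 * math.pi * hbar**2 / (m_He4 * k_B)) * (n / zeta_3_2) ** (2 / 3)
print(f"Ideal-gas BEC temperature for liquid He-4: {T_c:.2f} K")  # ~3 K
```

That the naive estimate lands within a factor of 1.5 of the real transition is part of why Bose condensation was accepted as the mechanism behind He4 superfluidity in the first place.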
So what is super solid Helium? Well, the theoretical conjecture is that if you take a solid, this solid has vacancies (i.e. it’s not perfect, and there are places in the lattice where atoms are missing), and it is these vacancies which can form a Bose condensate at low enough temperature. So the idea behind super solid Helium is that you have a highly pressurized chunk of cold Helium, and below a certain temperature the vacancies in the solid will all condense into the same state. Thus in such a substance the vacancies should flow without resistance through the solid. (I say Helium here, but it could possibly occur in other substances as well.)
But the question is, does such a mechanism occur? Over the years, various experiments have been performed looking for super solids and no one has seen any evidence of this strange phase of matter. Well in 2004, Eun-Seong Kim and Moses Chan of Pennsylvania State University performed experiments in which they claimed to have observed super solid Helium.
The basic idea behind the experiment is pretty simple: if you take a superfluid and try to spin it, it will be much easier to spin because of the lack of viscosity of the fluid. Thus if you take a torsional pendulum (a pendulum which, instead of swinging back and forth like a normal pendulum, is a disk attached to a rod; the disk is rotated by an angle and this rotation angle then oscillates like a pendulum), start it oscillating, and cool the system from above the superfluid transition temperature to below it, the system will all of a sudden become easier to spin, i.e. its moment of inertia will decrease. This will result in an increase in the oscillation frequency of the torsion pendulum. So what Kim and Chan did was take highly pressurized Helium, so that it was solid, and put it on such a torsional pendulum. And at around one tenth of a Kelvin, Kim and Chan observed exactly this decrease in the moment of inertia!
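The frequency shift follows from f = (1/2π)·√(κ/I): if a fraction ρ_s of the solid decouples from the oscillation, I drops by that fraction and f rises by roughly ρ_s/2. A sketch with illustrative numbers (the values of κ and I below are hypothetical, chosen only to show the scaling; the ~1% decoupled fraction is the order of magnitude I recall being reported):

```python
import math

kappa = 1.0e-3   # torsion constant, N*m/rad (hypothetical value)
I0 = 1.0e-7      # moment of inertia, kg*m^2 (hypothetical value)
rho_s = 0.01     # fraction of the solid that decouples (~1%)

def frequency(kappa, inertia):
    """Resonant frequency of a torsion oscillator: sqrt(kappa/I)/(2*pi)."""
    return math.sqrt(kappa / inertia) / (2 * math.pi)

f0 = frequency(kappa, I0)
f1 = frequency(kappa, I0 * (1 - rho_s))  # decoupled mass lowers I
shift = (f1 - f0) / f0
print(f"fractional frequency increase: {shift:.4f}")  # ~ rho_s / 2
```

Note that the absolute values of κ and I cancel out of the fractional shift, which is why the experiment only needs a stable oscillator, not an absolutely calibrated one.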
Now of course, science isn’t just about one team observing something and then everybody sitting back and saying “yes, that must be super solid Helium.” Instead what happens is that (1) theorists get all up in arms trying to figure out if there are alternative explanations for this experiment and begin thinking about how to test these explanations, and (2) experimentalists design experiments to duplicate, or to make complementary confirmations of, different properties super solid Helium would exhibit. The talk I went to yesterday was about some of the theoretical ideas for alternative explanations of the results of Kim and Chan, as well as some discussion of more recent results reported by Kim and Chan. Interestingly, I’d say that right now there is a stalemate: the alternative explanations now have problems explaining the experimental results, but new, more recent experiments also exhibit effects which are harder to fit with what the theory of super solid Helium would predict (in particular an experiment, which I did not understand very well, that attempted to verify the existence of the super solid phase in a different manner; i.e. one of those complementary confirmations seemed to fail). What was nice to hear was that a different experimental group was gearing up to repeat the experiment of Kim and Chan. So maybe soon we will have either a confirmation of the effect seen by Kim and Chan, or no confirmation, and then the task of figuring out what the heck is causing the effect they saw.
Science in action. Ain’t it beautiful?

Nobel Prize in Chemistry 2005

The Nobel prize in Chemistry this year goes to Robert H. Grubbs (Caltech), Richard R. Schrock (MIT), and Yves Chauvin (Institut Francais du Petrole) for the development of metathesis. Massive misspelling of “Caltech” runs amok among the world’s newspapers.

Is Geometry the Key?

Gilles Brassard has a very nice article in the Commentary section of Nature, “Is information the key?” From the abstract:

Quantum information science has brought us novel means of calculation and communication. But could its theorems hold the key to understanding the quantum world at its most profound level? Do the truly fundamental laws of nature concern — not waves and particles — but information?

The article is well worth reading.
The basic question asked in the paper is whether or not it is possible to derive quantum theory from basic rules about information plus a little bit more. For instance, one can ask whether it is possible to derive quantum theory by assuming things like no superluminal communication plus no bit commitment (two properties of quantum theory as we understand it today). To date, there have been some very nice attempts to move this task forward. In particular Brassard mentions the work of Bub, Clifton and Halvorson, which is very nice. However, my beef with all such derivations I’ve seen so far is that their assumptions are too strong. For example, in the Bub et al. work, they assume the theory must be described within a C*-algebraic framework. And this assumption just hides too much for me: such assumptions are basically assumptions of the linearity of the theory and don’t really shed light on why quantum theory should act in this manner. Linearity, for me, is basically the question “why amplitudes and not probabilities?” This, I find, is a central quantum mystery (well, not a mystery, but something I’d like to see a reason given for, in the same way that if I had been around in 1900, I would have wanted to see an explanation for the Lorentz transform, which is what Einstein, so beautifully, gave). On the other hand, the fact that one can make these assumptions and derive quantum theory or quantum-like behavior is extremely suggestive, and I would be happy if lots of people started thinking about this question.
Personally, I’ve already gone through a stage where I thought this might be the approach to understanding quantum theory and moved on to another stage (just as I have, throughout my life, loved every single interpretation. I even have a strong memory of sitting in a car on a vacation with my family, probably when I was in high school, where I distinctly remember understanding the many-worlds interpretation of quantum theory! On the other hand, I also have a vivid memory of waking up one night while I was an undergrad at Caltech with an absolutely impossible-to-disprove reason for why there is evil in the world. Strange feelings, those. “The moment of clarity faded like charity does.”) In work I did with Ben Toner, we showed a protocol for simulating the correlations produced by projective measurements on a singlet using shared randomness and a single bit of communication. For a long time, Ben and I wondered whether we could derive this protocol from more basic assumptions. For example, is there a simple game for which the protocol with one bit of communication is the best strategy (this game also being best solved, without the extra communication, by bare quantum theory)? Of course, one can always define a game such that these correlations and the protocol are optimal, but that is cheating: we wanted a simple game to go along with our simple protocol. Alas, we could never find such a game, or a more basic set of principles from which to derive our protocol. As a funny note, we called the game we were searching for “Warcraft.” And when we were looking for a game whose optimal strategy would yield all of quantum theory, we called it simply “The Game.” What is “The Game” at which quantum theory is the optimal strategy?
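For the curious, here is a Monte Carlo sketch of the one-bit protocol of quant-ph/0304076 as I understand it: Alice and Bob share two random unit vectors, Alice sends a single bit, and the outputs reproduce the singlet correlation E(a,b) = -a·b. The sampling code and names below are mine, an illustration rather than the paper's notation.

```python
import math
import random

def rand_unit():
    """Uniform random unit vector on the sphere."""
    z = random.uniform(-1, 1)
    phi = random.uniform(0, 2 * math.pi)
    s = math.sqrt(1 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def sgn(x):
    return 1 if x >= 0 else -1

def correlate(a, b, trials=100_000):
    """Estimate E[A*B] for measurement axes a, b under the protocol."""
    total = 0
    for _ in range(trials):
        l1, l2 = rand_unit(), rand_unit()          # shared randomness
        A = -sgn(dot(a, l1))                       # Alice's output
        c = sgn(dot(a, l1)) * sgn(dot(a, l2))      # the one communicated bit
        v = (l1[0] + c * l2[0], l1[1] + c * l2[1], l1[2] + c * l2[2])
        B = sgn(dot(b, v))                         # Bob's output
        total += A * B
    return total / trials

a = (0.0, 0.0, 1.0)
b = (1.0, 0.0, 0.0)
print(correlate(a, a))  # should be close to -1 (perfect anticorrelation)
print(correlate(a, b))  # should be close to 0 (perpendicular axes)
```

For aligned axes the anticorrelation is exact, trial by trial, which you can check by hand: Bob's vector l1 + c·l2 always ends up on the same side of a as l1.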
After working on “The Game” and related ideas, I’ve become less convinced that this is the proper approach to take. Why? Well, mostly due to the structure of the protocol we came up with for simulating the singlet quantum correlations. The important observation about this protocol, I now believe, is its geometric nature. If you look at the protocol (quant-ph/0304076), what is interesting about it, to me, is its beautiful geometry. I should mention that recently a reformulation of our protocol appeared, quant-ph/0507120 by Degorre, Laplante, and Roland, which is very nice and also demonstrates how simple the geometry involved in the protocol is. So why do I focus on this geometric aspect? Well, because I think that the ultimate explanation for why quantum theory is the way it is must, in some way, provide answers to the question of hidden variables in quantum theory (prejudice number 1), and the only way I know to get around such obstacles is to muck with the topology of spacetime (prejudice number 2); thus an attempt to understand our protocol in terms of changes in the topology of spacetime is the proper route to take. However, I haven’t yet succeeded in recasting our protocol in these terms. Perhaps some crazy wunderkind out there can easily see how to do this! (Alternatively, I wouldn’t be surprised if my prejudice number 2 were somehow replaced with particularly strange reasoning about the nature of time. Here I am thinking somewhat of the transactional interpretation of quantum theory.)
Anyway, understanding why quantum theory is the way it is is either one of the greatest mysteries of physics or a dead end that will never lead to any experiments. I hope for the former, but can’t quite convince myself that it can’t be the latter.