I’m in Pittsburgh this weekend for FOCS 2005. This is the first FOCS I’ve attended.
This evening there was a panel discussion at the business meeting (free beer!) on “Exciting the public about (theoretical) Computer Science.” (Update: for a detailed description of the business meeting see Rocco Servedio’s guest post at Lance Fortnow’s blog, for a real computer scientist’s comments see here, and for a nice collection of comments head over to Geomblog. For a fair and balanced counter, see Pit of Bable.) It was very interesting to hear about the severity of the public image crisis the field of theoretical computer science currently finds itself facing. It is true that even within the broader computer science community, theory is oftentimes seen as not very important. But beyond this, the public’s perception of the theory of computation is very limited, and a lot of the panel discussion focused on how to fix this. I mean, if I ask random people what they know about theoretical computer science, what are the chances that they will know anything at all? At best you might be very lucky and meet someone who has heard of the P versus NP question. Mention physics to people, on the other hand, and they immediately think of nuclear bombs, black holes, perhaps lasers and string theory, quantum mechanics, and atoms with electrons zipping around a nucleus (well, I’m not saying any of these things are correct physics 😉 ). So how can theoretical computer science convey the excitement of the field to the general public?
Concrete suggestions like “publish in Science and Nature” and “get IEEE and ACM to hire someone to do PR” were offered up, and probably deserve serious consideration. It was interesting to me, originally(?) a physicist, to hear how physics and astronomy were held up as prime examples of doing a good job conveying to the public the excitement of their research. It is true that physics and astronomy do convey their excitement well, and there are various reasons for this.
But I’d like to focus on why theoretical physics has done a good job at exciting the public. Why? Because it is much closer to the theory of computation. Because let’s face it: CS theory doesn’t have things dug up from the ground (dinosaurs, early primates, archeology), nor beautiful pictures of alien environments (planetary science, all of astronomy). And theoretical physics, like CS theory, is hard. I mean really hard. And also, I would claim, theoretical physics and CS theory share a very fundamental similarity: they are both essentially about advanced creative problem solving.
So let’s look at theoretical physics. How is the excitement of theoretical physics conveyed? Well, the first things that probably pop into most people’s minds in connection with theoretical physics are Stephen Hawking’s “A Brief History of Time” or maybe Brian Greene’s “The Elegant Universe.” Or maybe the recent NOVA program on “E=mc^2.” Now clearly neither of these books nor the TV show is explaining to the public the “real” story of theoretical physics. But what they do a good job of is convincing the audience that they, the audience, actually understand what is going on. As was mentioned on the panel tonight, people will hear about gravitons and think that they actually understand the physics of gravitons. But really, of course, they don’t. Do they even know why the exchange of particles can give rise to attractive forces, or how whether these forces are attractive or repulsive is connected to the spin of the exchanged particle? I seriously doubt it. Yet the authors of these books and TV shows have successfully given the audience the feeling that they do understand the field.
There are stories I have been told about Richard Feynman (OK, yeah, I just can’t resist another Feynman story) which say that when you went to one of his lectures, while you were listening you would think, “Yeah! This is great! Everything makes total sense now!” But when you left the lecture and tried to recall the reasoning and what Feynman was teaching, you couldn’t reproduce the results. I maintain that what was happening here is the same thing which good popular expositions on theoretical physics do: they convince the reader that they understand what is going on even though they most certainly do not. What is funny about the Feynman stories, of course, is that these are real physicists who themselves probably should understand the subject, and yet Feynman was able to convince them they understood what was going on, even though they really didn’t. Kind of scary.
So what lesson do I draw from this for the theory of CS? Well, they need a Richard Feynman and a Stephen Hawking! But seriously, they need to attempt to convey their results in a way which, while not totally faithful to the science, gives the reader a reason to believe that they understand what is going on. This, of course, is much harder to do as a computer scientist than as a theoretical physicist, because in the former field rigor is held in higher esteem than in physics, where hand-wavy arguments hold a much greater standing. But certainly even theoretical physicists (except the best) have to distort their understanding to reach the general public. So my advice to the theoretical computer science community is to let the rigor go but convey the spirit in a way that convinces the public that they understand what is going on.
Now one might argue that this is dishonest. And to the brighter readers it certainly is. But remember, it’s not the bright readers who are the main concern: they don’t constitute the public. (If they did, I wouldn’t be writing this post, nor would FOCS be having this panel discussion. Nor would there be trials about whether intelligent design should be taught in science courses at the beginning of the twenty-first century.)
Another interesting perspective coming from physics is that theoretical physics has conveyed to the public that it is really pursuing something fundamental. The two big fundamentals are “learning the laws of nature” and “understanding the origin of our universe.” CS theory hasn’t, in my opinion, exploited the fact that it is studying a fundamental question: the fundamental limits of computation. This research direction, to me, is as deep as understanding what the laws of nature are (and if you catch me on some days I might even say that one is deeper than the other. Which is deeper depends on the day. And perhaps on the last time I’ve had a conversation with Scott Aaronson.) Certainly this is one of the reasons people get interested in the theory of computer science. I myself have very fond memories of reading “The Turing Omnibus,” which introduced me at an early age to the ideas of P versus NP, error correcting codes, the von Neumann architecture, the theory of computability, etc. This is as exciting as thinking about string theory, or supersymmetry, or solutions to Einstein’s equations. And it certainly is a fundamental question about our universe. (I began my thesis at Berkeley with the sentence “Our generous universe comes equipped with the ability to compute.” Heh. Pretty cheesy, no?)
I think I could go on about this subject ad nauseam. So I’ll stop. And, of course, I’m more of an outsider than an insider here. This is my first FOCS. On the other hand, when one of the panelists asked how many had published in Science or Nature, I was one of about four who got to raise their hand. And the paper even had some computer science (about universal quantum computers) in it! And remember, if you publish in Nature or Science, there is the possibility of being sucked into their press cabal, and there will be articles about your work appearing simultaneously in newspapers around the world.
A New Kind of Disclaimer
Cosma Shalizi has posted a review of Stephen Wolfram’s “A New Kind of Science.” It’s not often that you find a review which begins
Attention conservation notice: Once, I was one of the authors of a paper on cellular automata. Lawyers for Wolfram Research Inc. threatened to sue me, my co-authors and our employer, because one of our citations referred to a certain mathematical proof, and they claimed the existence of this proof was a trade secret of Wolfram Research. I am sorry to say that our employer knuckled under, and so did we, and we replaced that version of the paper with another, without the offending citation. I think my judgments on Wolfram and his works are accurate, but they’re not disinterested.
or a review that ends (well almost) with
This brings me to the core of what I dislike about Wolfram’s book. It is going to set the field back by years. On the one hand, scientists in other fields are going to think we’re all crackpots like him. On the other hand, we’re going to be deluged, again, with people who fall for this kind of nonsense. I expect to have to waste a lot of time in the next few years de-programming students who’ll have read A New Kind of Science before knowing any better.
2 -> 4
Via the amazing John Baez’s Week 222 of “This Week’s Finds in Mathematical Physics” I find this paper “Fractal Spacetime Structure in Asymptotically Safe Gravity” by O. Lauscher, M. Reuter:
Abstract: Four-dimensional Quantum Einstein Gravity (QEG) is likely to be an asymptotically safe theory which is applicable at arbitrarily small distance scales. On sub-Planckian distances it predicts that spacetime is a fractal with an effective dimensionality of 2. The original argument leading to this result was based upon the anomalous dimension of Newton’s constant. In the present paper we demonstrate that also the spectral dimension equals 2 microscopically, while it is equal to 4 on macroscopic scales. This result is an exact consequence of asymptotic safety and does not rely on any truncation. Contact is made with recent Monte Carlo simulations.
Hmm… Yet another paper pointing towards a spacetime which is two-dimensional at microscopic scales and four-dimensional at large scales. Of course I’m told that if they add matter to these theories all hell will break loose. It will be interesting to see what hell looks like.
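(An aside for readers who, like me, need a reminder of what “spectral dimension” actually means: roughly, it is the effective dimension seen by a diffusion process on the spacetime. My own quick gloss, not anything taken from the paper: if [tex]$P(\sigma)$[/tex] is the probability that a random walker returns to its starting point after diffusion time [tex]$\sigma$[/tex], then the spectral dimension is [tex]$d_s = -2 \, \frac{d \ln P(\sigma)}{d \ln \sigma}$[/tex]. On flat d-dimensional space [tex]$P(\sigma) \propto \sigma^{-d/2}$[/tex], so [tex]$d_s = d$[/tex], and a scale-dependent [tex]$d_s$[/tex] is what allows the “dimension” to be 2 at short distances and 4 at long ones.)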
Your Symmetry Broke My Quantum Computer?
There’s an article in Scientific American (of all places… I stopped reading Scientific American when they started a section on science/pseudoscience. Sure, I agree with them, but I don’t want to read a science magazine to be told how science is different from pseudoscience; I already know that. Plus they stopped the amateur science section and the mathematical recreations section: really the two best reasons to read Scientific American in the good old days) on a mechanism for decoherence due to symmetry breaking.
Jeroen van den Brink and his colleagues at Leiden University in the Netherlands, however, suggest that even perfect isolation would not keep decoherence at bay. A process called spontaneous symmetry breaking will ruin the delicate state required for quantum computing. In the case of one proposed device based on superconducting quantum bits (qubits), they predict that this new source of decoherence would degrade the qubits after just a few seconds.
The paper in question, published in Physical Review Letters (and available as cond-mat/0408357), presents an interesting mechanism for decoherence. What is most interesting about this decoherence mechanism is the rate they obtain: [tex]$t_D = \frac{N h}{k_B T}$[/tex], where N is the number of microscopic degrees of freedom, and h, k_B, and T should be recognizable to every physicist 😉
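To get a feel for the timescale this gives, here’s a quick back-of-the-envelope evaluation in Python. The values of N and T below are just my illustrative guesses for a superconducting qubit, not numbers taken from the paper:

```python
# Rough evaluation of the symmetry-breaking decoherence time t_D = N*h/(k_B*T).
# N and T are illustrative guesses, not values from the paper.
h = 6.626e-34    # Planck's constant (J*s)
k_B = 1.381e-23  # Boltzmann's constant (J/K)

def t_D(N, T):
    """Decoherence time in seconds for N microscopic degrees of freedom at temperature T (kelvin)."""
    return N * h / (k_B * T)

for N in (1e6, 1e9):
    for T in (0.02, 1.0):  # 20 mK (dilution fridge temperatures) up to 1 K
        print(f"N = {N:.0e}, T = {T} K  ->  t_D = {t_D(N, T):.1e} s")
```

For N around 10^9 degrees of freedom and T around 20 mK this comes out to a couple of seconds, which is at least consistent with the “few seconds” quoted in the article.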
What does this mean for quantum computers? Well the above might indicate that this is some fundamental limit for quantum computing (and in particular for superconducting implementations of quantum computers for which this result will hold). But I don’t think this is true. I’ll let the article explain why:
Not everyone agrees that the constraint of a few seconds is a serious obstacle for superconducting qubits. John Martinis of the University of California at Santa Barbara says that one second “is fine for us experimentalists, since I think other physics will limit us well before this timescale.” According to theorist Steven M. Girvin of Yale University, “if we could get a coherence time of one second for a superconducting qubit, that would mean that decoherence would probably not be a limitation at all.” That is because quantum error correction can overcome decoherence once the coherence time is long enough, Girvin argues. By running on batches of qubits that each last for only a second, a quantum computer as a whole could continue working indefinitely.
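Girvin’s point can be made concrete with a very crude estimate: what matters for fault tolerance is roughly the error probability per gate, i.e. the gate time divided by the coherence time, compared against the fault-tolerance threshold. A toy version (the gate time and the threshold below are my own ballpark assumptions, not numbers from the article):

```python
# Toy estimate: is a 1 second coherence time "long enough" for error correction?
# The gate time and threshold are ballpark assumptions for illustration only.
t_coherence = 1.0   # seconds (the symmetry-breaking limit discussed above)
t_gate = 10e-9      # assumed superconducting gate time, ~10 ns
threshold = 1e-4    # a commonly quoted, conservative fault-tolerance threshold

error_per_gate = t_gate / t_coherence
print(f"error per gate ~ {error_per_gate:.0e}, threshold ~ {threshold:.0e}")
print("comfortably below threshold" if error_per_gate < threshold else "above threshold")
```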
"Drawing Theories Apart : The Dispersion of Feynman Diagrams in Postwar Physics" by David Kaiser
Some of you have accused me of Feynman hero worship. To which I plead guilty, but with exceptions! I certainly admire the guy for certain qualities, but like everyone, he was human and so comes along with all the lovely faults and trivialities that make up our entertaining species. But on to the subject at hand: I just finished reading Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics by David Kaiser.
This book was once a thesis. Parts of it read like it was once a thesis. On the other hand, I really like reading theses. But still, there are times when I wish a little more editing had been done to produce a more narrative tone.
That being said, I mostly recommend this book to the hard-core aficionado of the early history of quantum field theory. But if you are such a rare beast (I suspect most physicists are!), this book is very entertaining. The most interesting part of the first half of the book concerns the “split” between Feynman and Dyson in their take on the diagrams (interestingly, early on, the diagrams were often referred to as Feynman-Dyson diagrams) and how this difference could be traced through the postdocs and graduate students who learned the techniques from either Feynman or Dyson. It is interesting how the rigor of Dyson and the physical intuition of Feynman could be explicitly seen in how they drew the diagrams. Dyson would draw the diagrams always with right angles, clearly indicating that they were simply a tool for bookkeeping the perturbation theory. Feynman’s diagrams, on the other hand, had tilted lines, much more suggestive of the path integral formulation of quantum theory which Feynman had in mind in coming up with the rules for the diagrams.
The second half of the book is dedicated to a study of Geoffrey Chew and his idea of nuclear democracy. I certainly wish that this part of the book had more details, as this story is fascinating, but on the whole the book gives a nice introduction to the S-matrix dispersion tools and the basic ideas of the bootstrap, and looks at how diagrammatic methods played a role in this work (no longer really Feynman diagrams). Interestingly, I learned that Chew was probably the first professor to resign in protest over the University of California’s requirement of an anti-communist oath. Good for Chew.
Letters in the Sky with Dialogue
Steve Hsu is at it again with an interesting paper, this time with Anthony Zee (UCSB). And this one has to be read to be believed: physics/0510102:
Message in the Sky
Authors: S. Hsu, A. Zee
Comments: 3 pages, revtex
Subj-class: Popular Physics
We argue that the cosmic microwave background (CMB) provides a stupendous opportunity for the Creator of our universe (assuming one exists) to have sent a message to its occupants, using known physics. The medium for the message is unique. We elaborate on this observation, noting that it requires only careful adjustment of the fundamental Lagrangian, but no direct intervention in the subsequent evolution of the universe.
I especially like the last paragraph:
In conclusion, we believe that we have raised an intriguing possibility: a universal message might be encoded in the cosmic background. When more accurate CMB data becomes available, we urge that it be analyzed carefully for possible patterns. This may be even more fun than SETI.
Holography Oversold?
Warning: this post is about a subject I know a tiny tiny bit about. I suspect I will have to update it once I get irate emails pointing out my horrible misunderstandings.
Roman Buniy and Stephen Hsu (both from the University of Oregon…quack, quack…the mascot of UofO is the Duck!) cross-listed an interesting paper to quant-ph today: hep-th/0510021, “Entanglement entropy, black holes and holography.” (Steve posted about it on his blog.) As many of you know, the idea of holography is that the number of degrees of freedom of a region of our universe scales proportionally to the surface area of the region. This strange conjecture is extremely interesting, and bizarre, because it raises all sorts of questions about how such theories work (I especially have problems thinking about locality in such theories, but hey, that’s just me.) One line of evidence for the holographic principle comes from black hole physics. One can formulate a thermodynamics for black holes, and this thermodynamics gives an entropy for a black hole which is proportional to its area. Another interesting piece of evidence is the AdS/CFT correspondence, which shows an equivalence between a certain quantum gravity theory in an anti-de Sitter universe and a conformal field theory on the boundary of this space: i.e. quantum gravity in this space can be described by a theory on the surface of the space, a holographic theory, so to speak. Indeed, the fact that certain string theories have black holes which have a holographic number of degrees of freedom is taken as evidence that string theory might be consistent with our universe.
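For concreteness, the black hole entropy in question is the Bekenstein-Hawking formula [tex]$S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^2}$[/tex], where A is the area of the horizon and [tex]$\ell_P$[/tex] is the Planck length: the entropy grows with the area, not the volume, of the enclosed region, which is what makes the holographic counting so surprising.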
What Buniy and Hsu suggest in their paper is that the holographic bound is not a bound on the degrees of freedom of our theory of the universe, but that instead the holographic bound should be thought of as a bound on the entropy of a region in the presence of gravity. They point out that if you take gravity away, then the number of degrees of freedom scales like the volume (although, if you take the ground state of a local quantum field theory, this particular state has an entropy which scales like the area: the states Buniy and Hsu consider are therefore necessarily not ground states of such theories. But this doesn’t mean that they don’t exist or that we can’t construct such states.) They then argue that if, on the other hand, you want to avoid gravitational collapse, this requirement precludes such states, and indeed gives you states whose entropy scales like the area. What Buniy and Hsu seem to be arguing is that while one does obtain entropies which scale like the area using these arguments about black holes, this doesn’t imply that the degrees of freedom of the underlying theory must scale with the area.
One might wonder whether there is a difference between having an entropy scaling like the area and having the degrees of freedom scale like the area. Well, certainly there would be for an underlying theory of quantum gravity: presumably different degrees of freedom can be accessed which give the same area scaling, but which represent fundamentally different physical settings. So, for example, I can access some of these degrees of freedom, and as long as I don’t create a black hole, these degrees of freedom will be as real for me as they can be. But if I try to access them in such a manner that I create a black hole, I will only see the effective degrees of freedom proportional to the area of the black hole.
Which is all very interesting. Just think, maybe one of the greatest achievements of string theory, deriving holographic bounds, actually ends up being a step in the wrong direction. And, no, I’m not wishing this fate upon string theory. I wish no fate upon any theory: I just want to understand what nature’s solution is.
Nature Physics
The first issue of Nature Physics is out (which is where the article by Brassard on information and quantum theory appeared). From the opening letter from the editor:
Authors may be pleased to know that manuscripts can be submitted to Nature Physics not only in Microsoft Word, but in LaTeX too.
Which caused me to almost fall out of my chair laughing. Welcome to the modern world, Nature publishing.
Super Solid Helium?
Yesterday I went to a condensed matter seminar on “super solid Helium” by Greg Dash. What, you ask, is super solid Helium? Well, certainly you may have heard of superfluids. When you take Helium 4 and cool it down, somewhere around 2 Kelvin the liquid He4 (assuming the pressure is not too high, so that it does not solidify) makes a transition to a state of matter, the superfluid, in which it is a liquid with no viscosity. Well, actually I think what happens is that you get a state of matter which has one part which is superfluid and another part which is normal. The superfluid part of the liquid, along with having no viscosity, also has effectively infinite thermal conductivity, so it’s impossible to set up a temperature gradient in a superfluid. He3 also forms a superfluid at cold enough temperatures, but He3 does this at a much lower temperature, I believe around a few millikelvin. The mechanisms for superfluidity in these systems are different: in He4 it is Bose condensation of the He4 atoms themselves (which are bosons), and in He3 it is Bose condensation of pairs of He3 atoms which act as composite particles with Bose statistics (the mechanism is similar to the role Cooper pairs play in superconductivity.)
So what is super solid Helium? Well, the theoretical conjecture is that if you take a solid, this solid has vacancies (i.e. it’s not perfect: there are places in the lattice where the solid is missing atoms it should have), and it is these vacancies which can form a Bose condensate at low enough temperature. So the idea behind super solid Helium is that you have a highly pressurized chunk of cold Helium, and below a certain temperature the vacancies in the solid all condense into the same state. Thus in such a substance the vacancies should flow without resistance through the solid. (I say Helium here, but it could possibly occur in other substances as well.)
But the question is, does such a mechanism occur? Over the years, various experiments were performed looking for super solids, and no one had seen any evidence of this strange phase of matter. Well, in 2004, Eun-Seong Kim and Moses Chan of Pennsylvania State University performed experiments in which they claimed to have observed super solid Helium.
The basic idea behind the experiment is pretty simple: if you take a superfluid and try to spin it, it will be much easier to spin because of the lack of viscosity of the fluid. Thus if you take a torsional pendulum (a pendulum which, instead of swinging back and forth like a normal pendulum, is a disk attached to a rod; the disk is rotated by an angle and then this rotation angle oscillates like a pendulum), start it oscillating, and then cool the system from above the superfluid transition temperature to below it, the system will all of a sudden become easier to spin, i.e. its moment of inertia will decrease. This will result in an increase in the oscillation frequency of the torsional pendulum. So what Kim and Chan did was take highly pressurized Helium, so that it was solid, and put it on such a torsional pendulum. And at around one tenth of a Kelvin, Kim and Chan observed exactly this effect: a decrease in the moment of inertia!
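The size of the effect is easy to work out: for a torsional oscillator with torsion constant κ and moment of inertia I, the resonant frequency is [tex]$f = \frac{1}{2\pi}\sqrt{\kappa/I}$[/tex], so if some fraction of the solid decouples from the oscillating cell, I drops and f goes up. Here is a little sketch of the scaling; all the numbers are made up for illustration (I don't have the actual parameters of the Kim and Chan cell):

```python
import math

# Torsional oscillator: f = sqrt(kappa / I) / (2*pi).
# If a "super solid" fraction of the helium decouples from the oscillating cell,
# the moment of inertia I drops and the resonant frequency f rises.
# All numbers below are made up for illustration.
kappa = 1e-3                 # torsion constant (N*m/rad)
I_total = 1e-7               # moment of inertia of cell plus solid helium (kg*m^2)
I_helium = 0.1 * I_total     # suppose 10% of the inertia is the helium itself
supersolid_fraction = 0.01   # fraction of the helium inertia that decouples

def freq(I):
    return math.sqrt(kappa / I) / (2 * math.pi)

f_normal = freq(I_total)
f_super = freq(I_total - supersolid_fraction * I_helium)
print(f"fractional frequency shift: {(f_super - f_normal) / f_normal:.2e}")
```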
Now of course, science isn’t just about one team observing something and then everybody sitting back and saying “yes, that must be super solid Helium.” Instead what happens is that (1) theorists get all up in arms trying to figure out if there are alternative explanations for the experiment and begin thinking about how to test those explanations, and (2) experimentalists design experiments to duplicate the result or to make complementary confirmations of different properties super solid Helium would exhibit. The talk I went to yesterday was about some of the theoretical ideas for alternative explanations of the results of Kim and Chan, as well as some discussion of more recent results reported by Kim and Chan. Interestingly, I’d say that right now there is a stalemate: the alternative explanations have problems explaining the experimental results, but new, more recent experiments also exhibit effects which are harder to fit with what the theory of super solid Helium would predict (in particular, an experiment I did not understand very well which attempted to verify the existence of the super solid phase in a different manner, i.e. one of those complementary confirmations, seemed to fail). What was nice to hear was that a different experimental group is gearing up to repeat the experiment of Kim and Chan. So maybe soon we will have either a confirmation of the effect seen by Kim and Chan, or no confirmation and then the task of figuring out what the heck is causing the effect they saw.
Science in action. Ain’t it beautiful?
Is Geometry the Key?
Gilles Brassard has a very nice article in the Commentary section of Nature Physics, “Is information the key?” From the abstract:
Quantum information science has brought us novel means of calculation and communication. But could its theorems hold the key to understanding the quantum world at its most profound level? Do the truly fundamental laws of nature concern — not waves and particles — but information?
The article is well worth reading.
The basic question asked in the paper is whether or not it is possible to derive quantum theory from basic rules about information plus a little bit more. Thus, for instance, one can ask whether it is possible to derive quantum theory by assuming things like no superluminal communication plus no bit commitment (two properties of quantum theory as we understand it today). To date, there have been some very nice attempts to move this task forward. In particular Brassard mentions the work of Bub, Clifton and Halvorson, which is very nice. However, my beef with all such derivations I’ve seen so far is that their assumptions are too strong. For example, in the Bub et al work, they assume the theory must be described within a C*-algebraic framework. And this assumption just hides too much for me: such assumptions are basically assumptions of the linearity of the theory and don’t really shed light on why quantum theory should act in this manner. Linearity, for me, is basically the question “why amplitudes and not probabilities?” This, I find, is a central quantum mystery (well, not a mystery, but something I’d like to see a reason given for, in the same way that if I had been around in 1900 I would have wanted to see an explanation for the Lorentz transform, which is what Einstein, so beautifully, did). On the other hand, the fact that one can make these assumptions and derive quantum theory or quantum-like behavior is extremely suggestive, and I would be happy if lots of people started thinking about this question.
Personally, I’ve already gone through a stage where I thought this might be the approach to understanding quantum theory and moved on to another stage (just as I have, throughout my life, loved every single interpretation. I even have a strong memory of sitting in a car on a vacation with my family, probably when I was in high school, where I distinctly remember understanding the many-worlds interpretation of quantum theory! On the other hand, I also have a vivid memory of waking up one night while I was an undergrad at Caltech and having an absolutely impossible-to-disprove reason for why there is evil in the world. Strange feelings, those. “The moment of clarity faded like charity does.”) In work I did with Ben Toner, we showed a protocol for simulating the correlations produced by projective measurements on a singlet using shared randomness and a single bit of communication. For a long time, Ben and I wondered whether we could derive this protocol from more basic assumptions. For example, is there a simple game for which the protocol with one bit of communication is the best strategy (this game also being optimally solved by bare quantum theory, unaided by communication)? Of course, one can always define a game such that these correlations and the protocol are the best, but that is cheating: we wanted a simple game to go along with our simple protocol. Alas, we could never find such a game or a more basic set of principles from which to derive our protocol. As a funny note, we called the game we were searching for “Warcraft.” And when we were looking for a game for which the optimal strategy would yield all of quantum theory we called it simply “The Game.” What is “The Game” at which quantum theory is the optimal strategy?
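For the curious, here is a little Monte Carlo sketch of the kind of one-bit protocol I’m describing. This is written from memory, so treat it as illustrative and check the paper for the real thing: Alice and Bob share two random unit vectors, Alice’s output and the single communicated bit are determined by the signs of her measurement direction against those vectors, and Bob combines the two shared vectors using the bit he receives.

```python
import numpy as np

# Monte Carlo sketch of a one-bit simulation of singlet correlations, in the
# style of quant-ph/0304076.  Written from memory: consult the paper for the
# authoritative protocol.
rng = np.random.default_rng(0)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def simulate(a, b, n_samples=200_000):
    """Estimate the correlation <A*B> for measurement directions a (Alice) and b (Bob)."""
    lam1 = random_unit_vectors(n_samples)    # shared random vectors
    lam2 = random_unit_vectors(n_samples)
    s1 = np.sign(lam1 @ a)
    s2 = np.sign(lam2 @ a)
    A = -s1                                  # Alice's +/-1 output
    c = s1 * s2                              # the single communicated bit
    B = np.sign(lam1 @ b + c * (lam2 @ b))   # Bob's +/-1 output
    return np.mean(A * B)

a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(np.pi / 3), 0.0, np.cos(np.pi / 3)])
print("simulated correlation:", simulate(a, b))
print("singlet prediction   :", -a @ b)      # quantum prediction is -a.b
```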
After working on “The Game” and related ideas, I’ve become less convinced that this is the proper approach to take. Why? Well, mostly due to the structure of the protocol we came up with for simulating the singlet quantum correlations. The important observation, I now believe, about this protocol is its geometric nature. If you look at the protocol (quant-ph/0304076), what is interesting about it, to me, is its beautiful geometry. I should mention that recently a reformulation of our protocol has appeared, quant-ph/0507120 by Degorre, Laplante, and Roland, which is very nice and also demonstrates how simple the geometry involved in the protocol is. So why do I focus on this geometric aspect? Well, because I think that the ultimate explanation for why quantum theory is the way it is must in some way provide answers to the question of hidden variables in quantum theory (prejudice number 1), and the only way I know to get around such obstacles is to muck with the topology of spacetime (prejudice number 2); thus an attempt to understand our protocol in terms of changes in the topology of spacetime is the proper route to take. However, I haven’t yet succeeded in recasting our protocol in these terms. Perhaps some crazy wunderkind out there can easily see how to do this! (Alternatively, I wouldn’t be surprised if my prejudice number two is somehow replaced with particularly strange reasoning about the nature of time. Here I am thinking somewhat of the transactional interpretation of quantum theory.)
Anyway, understanding why quantum theory is the way it is, is either one of the greatest mysteries of physics or a dead end that will never lead to any experiments. I hope for the former, but can’t quite convince myself that it can’t be the latter.