The Gospel of Theoretical Computer Science

I’m in Pittsburgh this weekend for FOCS 2005. This is the first FOCS I’ve attended.
This evening there was a panel discussion at the business meeting (free beer!) on “Exciting the public about (theoretical) Computer Science.” (Update: for a detailed description of the business meeting see Rocco Servedio’s guest post at Lance Fortnow’s blog, for a real computer scientist’s comments see here, and for a nice collection of comments head over to Geomblog. For a fair and balanced counter, see Pit of Bable.)

It was very interesting to hear about the severity of the public image crisis the field of theoretical computer science currently finds itself facing. It is true that even within the broader computer science community, theory is oftentimes seen as not very important. But beyond this, the public’s perception of the theory of computation is very limited, and a lot of the panel discussion focused on how to fix this. I mean, if I ask random people whether they know anything about theoretical computer science, what are the chances they can name a single result? At best you might be very lucky and meet someone who has heard of the P versus NP question. Mention physics to people, on the other hand, and they immediately think of nuclear bombs, black holes, perhaps lasers and string theory, quantum mechanics, and atoms with electrons zipping around a nucleus (well, I’m not saying any of these things are correct physics 😉 ). So how can theoretical computer science convey the excitement of the field to the general public?
Concrete suggestions like “publish in Science and Nature” and “get IEEE and ACM to hire someone to do PR” were offered up, and probably deserve serious consideration. It was interesting to me, as (originally?) a physicist, to hear physics and astronomy held up as prime examples of fields that do a good job of conveying to the public the excitement of their research. It is true that they do, and for a variety of reasons.
But I’d like to focus on why theoretical physics in particular has done a good job of exciting the public. Why? Because it is much closer to the theory of computation. Let’s face it: CS theory doesn’t have things dug up from the ground (dinosaurs, early primates, archaeology), nor beautiful pictures of alien environments (planetary science, all of astronomy). And theoretical physics, like CS theory, is hard. I mean really hard. I would also claim that theoretical physics and CS theory share a very fundamental similarity: both are essentially about advanced creative problem solving.
So let’s look at theoretical physics. How is its excitement conveyed? Well, the first things that probably pop into most people’s minds in connection with theoretical physics are Stephen Hawking’s “A Brief History of Time,” Brian Greene’s “The Elegant Universe,” or maybe the recent NOVA program on “E=mc^2.” Now clearly none of these books or TV shows is explaining to the public the “real” story of theoretical physics. But what they do well is convince the audience that they, the audience, actually understand what is going on. As was mentioned on the panel tonight, people will hear about gravitons and think that they actually understand the physics of gravitons. But really, of course, they don’t. Do they even know why the exchange of particles can give rise to attractive forces, or how the spin of the exchanged particle determines whether those forces are attractive or repulsive? I seriously doubt it. Yet the authors of these books and TV shows have successfully given the audience the feeling that they understand the field.
There are stories I have been told about Richard Feynman (OK, yeah, I just can’t resist another Feynman story): when you went to one of his lectures, while you were listening you would think, “Yeah! This is great! Everything makes total sense now!” But when you left the lecture and tried to recall the reasoning and what Feynman was teaching, you couldn’t reproduce the results. I maintain that what was happening there is the same thing which good popular expositions of theoretical physics do: they convince the reader that they understand what is going on even though they most certainly do not. What is funny about the Feynman stories, of course, is that these were real physicists who themselves probably should understand the subject, and yet Feynman was able to convince them that they understood what was going on, even though they really didn’t. Kind of scary.
So what lesson do I draw from this for the theory of CS? Well, they need a Richard Feynman and a Stephen Hawking! But seriously, they need to convey their results in a way which, while not totally faithful to the science, gives the reader a reason to believe that they understand what is going on. This, of course, is much harder to do as a computer scientist than as a theoretical physicist, because rigor is held in higher esteem in the former, while in physics hand-wavy arguments have a much greater standing. But certainly even theoretical physicists (except the best) have to distort their understanding for the general public. So my advice to the theoretical computer science community is to let the rigor go, and convey the spirit in a way that convinces the public they understand what is going on.
Now one might argue that this is dishonest. And to the brighter readers it certainly is. But remember, it’s not the bright readers who are the main concern: they don’t constitute the public. (If they did, I wouldn’t be writing this post, nor would FOCS be having this panel discussion. Nor would there be trials about whether intelligent design should be taught in science courses at the beginning of the twenty-first century.)
Another interesting perspective coming from physics is that theoretical physics has conveyed to the public that it is really pursuing something fundamental. The two big fundamentals are “learning the laws of nature” and “understanding the origin of our universe.” CS theory hasn’t, in my opinion, exploited the fact that it too is studying a fundamental question: the fundamental limits of computation. This research direction, to me, is as deep as understanding what the laws of nature are (and if you catch me on certain days I might even say that one is deeper than the other. Which is deeper depends on the day, and perhaps on the last time I had a conversation with Scott Aaronson.) Certainly this is one of the reasons people get interested in the theory of computer science. I myself have very fond memories of reading “The Turing Omnibus,” which introduced me at an early age to the ideas of P versus NP, error correcting codes, the von Neumann architecture, the theory of computability, etc. These ideas are as exciting as string theory, or supersymmetry, or solutions to Einstein’s equations. And it certainly is a fundamental question about our universe. (I began my thesis at Berkeley with the sentence “Our generous universe comes equipped with the ability to compute.” Heh. Pretty cheesy, no?)
I think I could go on about this subject ad nauseam. So I’ll stop. And, of course, I’m more of an outsider than an insider here: this is my first FOCS. On the other hand, when one of the panelists asked how many in the audience had published in Science or Nature, I was one of only about four who got to raise their hand. And the paper even had some computer science (about universal quantum computers) in it! And remember, if you publish in Nature or Science, there is the possibility of being sucked into their press cabal, with articles about your work appearing simultaneously in newspapers around the world.

A New Kind of Disclaimer

Cosma Shalizi has posted a review of Stephen Wolfram’s “A New Kind of Science.” It’s not often that you find a review which begins

Attention conservation notice: Once, I was one of the authors of a paper on cellular automata. Lawyers for Wolfram Research Inc. threatened to sue me, my co-authors and our employer, because one of our citations referred to a certain mathematical proof, and they claimed the existence of this proof was a trade secret of Wolfram Research. I am sorry to say that our employer knuckled under, and so did we, and we replaced that version of the paper with another, without the offending citation. I think my judgments on Wolfram and his works are accurate, but they’re not disinterested.

or a review that ends (well almost) with

This brings me to the core of what I dislike about Wolfram’s book. It is going to set the field back by years. On the one hand, scientists in other fields are going to think we’re all crackpots like him. On the other hand, we’re going to be deluged, again, with people who fall for this kind of nonsense. I expect to have to waste a lot of time in the next few years de-programming students who’ll have read A New Kind of Science before knowing any better.

My Printer Turned Me In

Via Hogg’s Universe, this is freaky:

It sounds like a conspiracy theory, but it isn’t. The pages coming out of your color printer may contain hidden information that could be used to track you down if you ever cross the U.S. government.
Last year, an article in PC World magazine pointed out that printouts from many color laser printers contained yellow dots scattered across the page, viewable only with a special kind of flashlight. The article quoted a senior researcher at Xerox Corp. as saying the dots contain information useful to law-enforcement authorities, a secret digital “license tag” for tracking down criminals.
The content of the coded information was supposed to be a secret, available only to agencies looking for counterfeiters who use color printers.
Now, the secret is out.
Yesterday, the Electronic Frontier Foundation, a San Francisco consumer privacy group, said it had cracked the code used in a widely used line of Xerox printers, an invisible bar code of sorts that contains the serial number of the printer as well as the date and time a document was printed.

Holy conspiracy theory, batman!
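Just to make the idea of an invisible “license tag” concrete, here is a toy sketch of how one might pack a serial number and a timestamp into a grid of barely-visible dots. To be clear, this layout is entirely my own invention for illustration; it is not the actual Xerox scheme that the EFF decoded.

```python
# A toy "tracking dots" encoder. The layout (one byte per column, bits
# running down the rows, an even-parity row at the bottom) is invented
# for illustration and is NOT the real Xerox/DocuColor scheme.

def encode_tracking_dots(serial, year, month, day, hour, minute):
    fields = [year % 100, month, day, hour, minute,
              (serial >> 24) & 0xFF, (serial >> 16) & 0xFF,
              (serial >> 8) & 0xFF, serial & 0xFF]
    # eight data rows: row b holds bit b of each field byte
    grid = [[(byte >> b) & 1 for byte in fields] for b in range(8)]
    grid.append([sum(col) % 2 for col in zip(*grid)])  # parity row
    return grid

# Render the dot pattern as it might be tiled across a printed page.
for row in encode_tracking_dots(serial=123456789,
                                year=2005, month=10, day=18,
                                hour=21, minute=30):
    print(" ".join("o" if bit else "." for bit in row))
```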
When my uncle worked for IBM, he told me one of the intelligence agencies used to visit from time to time and “suggest” that they use a particular cryptosystem. He said the experts at IBM would look these systems over and were always very suspicious of what the agency proposed, so they never used them. This always seemed like a silly approach to me. Much better would be to get one of your supersmart mathematicians hired at IBM and then use this insider to convince the company to adopt your cryptosystem. Well, maybe you would need more than one supersmart mathematician, but still…

s, p, d, f

From Nature Physics, f-wave Superconductors?:

Na_xCoO_2·yH_2O must be hydrated to superconduct [1], and its triangular CoO_2 layers provide an intriguing contrast with the square CuO_2 layers of the high-temperature superconductors. Its superconductivity is often assumed to be unconventional in that the Cooper pairs are not in a spin singlet state with s-wave symmetry, as with conventional superconductors. According to the Pauli exclusion principle, pairs with a singlet (triplet) spin part have a corresponding even (odd) spatial part, designated as s-, p-, d- or f-wave pairing in accordance with the pair angular momentum. However, in Na_xCoO_2·yH_2O, experimental reports are often contradictory [2-6] and solid evidence for any particular pairing state remains lacking. This has led to an unprecedented number of proposals [2,3,5,7-13] for the pairing symmetry (perhaps the greatest number ever for a single compound), each in agreement with some subset of the available data. Here we test each of the 25 symmetry-allowed pairing states [14] against firmly established properties of the compound. Surprisingly, this eliminates most possible pairings. The two remaining states both have f-wave symmetry, suggesting that Na_xCoO_2·yH_2O may be the most exotic superconductor discovered so far. We discuss expected features of these states and suggest experiments to distinguish between them.
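A quick aside for non-physicists on the s/p/d/f bookkeeping in that abstract (this is standard textbook quantum mechanics, nothing specific to the paper): the two-electron pair wavefunction must be antisymmetric under exchange. Writing it as a spatial part times a spin part, [tex]$\Psi(1,2)=\varphi(\mathbf{r})\chi(s_1,s_2)$[/tex], the spatial part picks up a factor [tex]$(-1)^{\ell}$[/tex] under exchange, where [tex]$\ell$[/tex] is the pair’s relative orbital angular momentum. So an antisymmetric spin singlet forces even [tex]$\ell$[/tex]: 0 (s-wave) or 2 (d-wave), while a symmetric spin triplet forces odd [tex]$\ell$[/tex]: 1 (p-wave) or 3 (f-wave). An f-wave superconductor would thus have spin-triplet Cooper pairs carrying three units of relative orbital angular momentum. Exotic indeed.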

2 -> 4

Via the amazing John Baez’s Week 222 of “This Week’s Finds in Mathematical Physics” I find this paper “Fractal Spacetime Structure in Asymptotically Safe Gravity” by O. Lauscher, M. Reuter:

Abstract: Four-dimensional Quantum Einstein Gravity (QEG) is likely to be an asymptotically safe theory which is applicable at arbitrarily small distance scales. On sub-Planckian distances it predicts that spacetime is a fractal with an effective dimensionality of 2. The original argument leading to this result was based upon the anomalous dimension of Newton’s constant. In the present paper we demonstrate that also the spectral dimension equals 2 microscopically, while it is equal to 4 on macroscopic scales. This result is an exact consequence of asymptotic safety and does not rely on any truncation. Contact is made with recent Monte Carlo simulations.

Hmm… yet another paper pointing towards a spacetime which is two-dimensional at microscopic scales and four-dimensional at large scales. Of course, I’m told that if they add matter to these theories all hell will break loose. It will be interesting to see what hell looks like.
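For those who haven’t met the term, the “spectral dimension” is the dimension a random walker feels: the probability of returning to the starting point after time t falls off as [tex]$t^{-d_s/2}$[/tex]. Here is a minimal sketch (mine, not from the paper) checking that this recipe recovers [tex]$d_s=2$[/tex] on an ordinary two-dimensional lattice:

```python
# Estimate the spectral dimension of the 2D square lattice from the
# random-walk return probability P(t) ~ t^(-d_s/2). For a simple random
# walk on Z^2 the return probability after 2n steps is exactly
# [C(2n, n) / 4^n]^2, which we evaluate in log form to avoid overflow.
import math

def log_return_prob(n):
    # log of C(2n, n) / 4^n, computed via log-gamma
    log_p1d = math.lgamma(2*n + 1) - 2*math.lgamma(n + 1) - 2*n*math.log(2)
    return 2 * log_p1d  # squared: the 2D walk factors into two diagonal 1D walks

n1, n2 = 500, 5000
slope = (log_return_prob(n2) - log_return_prob(n1)) / (math.log(2*n2) - math.log(2*n1))
print(f"spectral dimension ~ {-2*slope:.3f}")  # ~2.0, as expected for Z^2
```

The paper’s claim is that in quantum gravity this same diagnostic gives 2 at short distances and 4 at long ones.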

Your Symmetry Broke My Quantum Computer?

An article in Scientific American (of all places… I stopped reading Scientific American when they started a section on science versus pseudoscience. Sure, I agree with them, but I don’t want to read a science magazine to learn how science is different from pseudoscience; I already know that. Plus they dropped the amateur scientist and mathematical recreations sections: really the two best reasons to read Scientific American in the good old days) on a mechanism for decoherence due to symmetry breaking.

Jeroen van den Brink and his colleagues at Leiden University in the Netherlands, however, suggest that even perfect isolation would not keep decoherence at bay. A process called spontaneous symmetry breaking will ruin the delicate state required for quantum computing. In the case of one proposed device based on superconducting quantum bits (qubits), they predict that this new source of decoherence would degrade the qubits after just a few seconds.

The paper in question, published in Physical Review Letters (and available as cond-mat/0408357), presents an interesting mechanism for decoherence. What is most interesting about this mechanism is the rate it yields: [tex]$t_D = \frac{N h}{k_B T}$[/tex], where N is the number of microscopic degrees of freedom, and h, k_B, and T should be recognizable to every physicist 😉
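To get a feel for the scale, here is a back-of-the-envelope sketch; the values of N and T are my own illustrative guesses for a superconducting qubit, not numbers taken from the paper:

```python
# Rough magnitude of the symmetry-breaking decoherence time t_D = N h / (k_B T).
# N and T below are illustrative guesses, not values from the paper.
h = 6.626e-34    # Planck's constant (J s)
k_B = 1.381e-23  # Boltzmann's constant (J/K)

N = 1e9          # assumed number of microscopic degrees of freedom
T = 0.010        # assumed operating temperature (K): a 10 mK dilution fridge

t_D = N * h / (k_B * T)
print(f"t_D ~ {t_D:.1f} s")  # ~4.8 s, i.e. the "few seconds" scale quoted above
```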
What does this mean for quantum computers? Well the above might indicate that this is some fundamental limit for quantum computing (and in particular for superconducting implementations of quantum computers for which this result will hold). But I don’t think this is true. I’ll let the article explain why:

Not everyone agrees that the constraint of a few seconds is a serious obstacle for superconducting qubits. John Martinis of the University of California at Santa Barbara says that one second “is fine for us experimentalists, since I think other physics will limit us well before this timescale.” According to theorist Steven M. Girvin of Yale University, “if we could get a coherence time of one second for a superconducting qubit, that would mean that decoherence would probably not be a limitation at all.” That is because quantum error correction can overcome decoherence once the coherence time is long enough, Girvin argues. By running on batches of qubits that each last for only a second, a quantum computer as a whole could continue working indefinitely.

The Publishing Divide

A new LA Times article about the controversy over the discovery of a possible tenth planet. As those of you who read the original article know, the controversy arose when Michael Brown from Caltech was scooped in the discovery of this object and then later noticed that the “scoop”ers, led by Professor Jose Luis Ortiz, had visited a website which contained information about where Brown and coworkers were pointing their telescope. Interestingly, here is what Professor Ortiz had to say:

Ortiz argues he has done nothing wrong, and the data he found using the Google search engine should be considered public and thus free to use.
“If … somebody uses Google to find publicly available information on the Internet and Google directs to a public Web page, that is perfectly legitimate,” Ortiz wrote in an e-mail to The Times. “That is no hacking or spying or anything similar.”

Interesting line of reasoning. Of course the real problem is not that someone accessed the public information, but that no acknowledgement of this access was made in Professor Ortiz and coworkers’ communications. One thought I had (yeah, sometimes I have those things!) was whether the fact that this information was publicly accessible means that it was “published,” in the same way that preprints are “published” or information submitted to a database is “published.” Well, probably not, but it seems we could construct an instance which comes even closer to this “published” line. Consider that a challenge 😉

Math Doh!

A biophysical chemist sends me the following link from the San Francisco Chronicle, “Cal math lecture to feature binge-eating cartoon dad,” detailing a math program on the Simpsons and math this Sunday at 2 p.m. at MSRI in Berkeley. (I lived almost level with MSRI in the Berkeley hills while I was a graduate student. Oh, what a view!) Sounds like fun.
What is really funny, however, is the example the article gives of math and the Simpsons being used together:

Homer, in a dream, wrote that 1,782 to the 12th power plus 1,841 to the 12th power equals 1,922 to the 12th power. (It does.)

First of all, it wasn’t a dream. Homer had slipped into… the third dimension (which we define as that place where frinkahedrons exist, of course.) And second, if what the author states were true, then Princeton mathematician Andrew Wiles would have quite a bit of egg on his face, because the above statement would be a counterexample to Fermat’s Last Theorem (not to be confused with the other important FLT, Fermat’s Little Theorem), which Andrew Wiles famously proved (well, proved, and then they found an error, and then he fixed the error. Genius!) Fermat’s Last Theorem states that there are no positive integers x, y, and z such that x^n+y^n=z^n for n an integer greater than two. What is funny is that if you evaluate the two sides of Homer’s equation, they agree in their first nine significant digits:

1782^12+1841^12=25412102586…
1922^12=25412102593…

So if you type this equation into a calculator which only keeps ten digits of precision, it will appear to hold (rounding that last digit to the same number, I think. For the actual program used to find this near-violation, see here.) So it seems as if this joke, a “calculator significant digit” violation of Fermat’s Last Theorem, has caught its first victim. I’ll bet the journalist involved did exactly that: he just plugged it into his handy calculator! Well, maybe not, but it’s still pretty funny to think that this might have occurred.
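Python’s exact integer arithmetic makes the joke easy to unmask; here’s a quick check:

```python
# Exact check of Homer's "counterexample" to Fermat's Last Theorem.
lhs = 1782**12 + 1841**12
rhs = 1922**12

print(lhs == rhs)  # False: Wiles can relax

# Count how many leading digits agree: enough to fool a 10-digit calculator.
a, b = str(lhs), str(rhs)
agree = next(i for i, (x, y) in enumerate(zip(a, b)) if x != y)
print(a[:11], "vs", b[:11])   # 25412102586 vs 25412102593
print(agree, "leading digits agree")  # 9
```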

Math and Science Erosion

The New York Times has an article today entitled “Top Advisory Panel Warns of an Erosion of the U.S. Competitive Edge in Science,” discussing a report issued by the National Academies concerning the scientific competitiveness of the United States. Here is an interesting fact:

Last year, more than 600,000 engineers graduated from institutions of higher education in China, compared to 350,000 in India and 70,000 in the United States.

Funny, when I read this, the first thought that comes to my mind is that the competition for engineering jobs in China must be huge!
I always have a big mixed bag of emotions when I read articles like this. On the one hand, like most scientists, I tend to think that science and research are underfunded: funding as a percentage of GDP is about half what it was in the 60s. On the other hand, I tend to see the increase in funding by other countries in a positive light: that other governments are realizing they need to spend more on science and research is good for the researchers in those countries and also good for the world (of course, global inequities mean this good is diluted as a function of distance down the first-to-second-to-third-world ladder.)
What has certainly been true over the last fifty years is that the U.S. has built up an incredible system of higher education (seventy percent of Nobel prize winners work in U.S. universities, as one silly example; we spend about twice as much per student on higher education as western Europe, as another.) But do I begrudge the rest of the world similar top universities? That doesn’t seem right. On the other hand, when I see destructive factors at work in the U.S. university system (as is occurring, for example, because of perceived (and actual 🙁 ) hostility towards foreign graduate students), this doesn’t make me happy.
So sometimes it’s hard to keep up a gloomy face: behind all of the rhetoric, I see the world progressing at an increasing rate, which, I believe, is a good thing. I guess I just wouldn’t be good at producing a report like this one, because I’d focus almost exclusively on the negatives of the U.S. system and very little on the positives of other nations’ progress (except as an example.)