2 -> 4

Via the amazing John Baez’s Week 222 of “This Week’s Finds in Mathematical Physics” I find the paper “Fractal Spacetime Structure in Asymptotically Safe Gravity” by O. Lauscher and M. Reuter:

Abstract: Four-dimensional Quantum Einstein Gravity (QEG) is likely to be an asymptotically safe theory which is applicable at arbitrarily small distance scales. On sub-Planckian distances it predicts that spacetime is a fractal with an effective dimensionality of 2. The original argument leading to this result was based upon the anomalous dimension of Newton’s constant. In the present paper we demonstrate that also the spectral dimension equals 2 microscopically, while it is equal to 4 on macroscopic scales. This result is an exact consequence of asymptotic safety and does not rely on any truncation. Contact is made with recent Monte Carlo simulations.

Hmm… yet another paper pointing towards a spacetime that is two-dimensional at microscopic scales and four-dimensional at large scales. Of course, I’m told that if they add matter to these theories, all hell will break loose. It will be interesting to see what hell looks like.
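For reference, the spectral dimension the abstract talks about is defined through the return probability of a diffusing particle, P(t) ~ t^(-d_s/2). Here is a minimal sanity check on flat space (my own toy construction, not the paper’s: a walk whose d coordinates are independent 1D simple random walks), which recovers d_s = d:

```python
from math import lgamma, log

def log_return_prob(n, d):
    # Probability that a walk whose d coordinates are independent 1D
    # simple random walks is back at the origin after t = 2n steps:
    # P = [C(2n, n) / 4^n]^d, computed in log form via lgamma.
    log_p1 = lgamma(2 * n + 1) - 2 * lgamma(n + 1) - n * log(4)
    return d * log_p1

def spectral_dimension(d, n_small=100, n_large=1000):
    # P(t) ~ t^(-d_s/2), so d_s is -2 times the slope of log P vs log t.
    slope = (log_return_prob(n_large, d) - log_return_prob(n_small, d)) / (
        log(2 * n_large) - log(2 * n_small))
    return -2 * slope

print(spectral_dimension(2))  # close to 2
print(spectral_dimension(4))  # close to 4
```

On flat space the spectral dimension just reproduces the topological dimension; the QEG claim is that the effective d_s runs from 4 at macroscopic scales down to 2 near the Planck scale.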

Your Symmetry Broke My Quantum Computer?

An article in Scientific American (of all places… I stopped reading Scientific American when they started a section on science versus pseudoscience. Sure, I agree with them, but I don’t read a science magazine to be told how science differs from pseudoscience; I already know that. Plus they dropped the amateur science section and the mathematical recreations section: really the two best reasons to read Scientific American in the good old days) on a mechanism for decoherence due to symmetry breaking.

Jeroen van den Brink and his colleagues at Leiden University in the Netherlands, however, suggest that even perfect isolation would not keep decoherence at bay. A process called spontaneous symmetry breaking will ruin the delicate state required for quantum computing. In the case of one proposed device based on superconducting quantum bits (qubits), they predict that this new source of decoherence would degrade the qubits after just a few seconds.

The paper in question, published in Physical Review Letters (and available as cond-mat/0408357), presents an interesting mechanism for decoherence. What is most interesting about this decoherence mechanism is the rate they obtain: [tex]$t_D = \frac{N h}{k_B T}$[/tex], where N is the number of microscopic degrees of freedom, and h, k_B, and T should be recognizable to every physicist 😉
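To get a feel for the scale, here is the formula with some numbers plugged in (N and T below are my illustrative guesses for a superconducting qubit in a dilution refrigerator, not values taken from the paper):

```python
# t_D = N h / (k_B T): an order-of-magnitude estimate only.
h = 6.626e-34    # Planck's constant (J s)
k_B = 1.381e-23  # Boltzmann's constant (J/K)

def decoherence_time(N, T):
    # N: number of microscopic degrees of freedom, T: temperature in kelvin
    return N * h / (k_B * T)

# A guess: N ~ 10^9 degrees of freedom at T ~ 10 mK gives a few seconds,
# consistent with the timescale quoted in the article.
print(decoherence_time(1e9, 0.010))  # roughly 5 seconds
```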
What does this mean for quantum computers? Well, the above might indicate that this is some fundamental limit for quantum computing (and in particular for superconducting implementations of quantum computers, for which this result holds). But I don’t think this is true. I’ll let the article explain why:

Not everyone agrees that the constraint of a few seconds is a serious obstacle for superconducting qubits. John Martinis of the University of California at Santa Barbara says that one second “is fine for us experimentalists, since I think other physics will limit us well before this timescale.” According to theorist Steven M. Girvin of Yale University, “if we could get a coherence time of one second for a superconducting qubit, that would mean that decoherence would probably not be a limitation at all.” That is because quantum error correction can overcome decoherence once the coherence time is long enough, Girvin argues. By running on batches of qubits that each last for only a second, a quantum computer as a whole could continue working indefinitely.

The Publishing Divide

A new LA Times article about the controversy over the discovery of a possible tenth planet. As those of you who read the original article know, the controversy arose when Michael Brown from Caltech was scooped in the discovery of this object and then later noticed that the “scoop”ers, led by Professor Jose Luis Ortiz, had visited a website which contained information about where Brown and coworkers were pointing their telescope. Interestingly, here is what Professor Ortiz had to say:

Ortiz argues he has done nothing wrong, and the data he found using the Google search engine should be considered public and thus free to use.
“If … somebody uses Google to find publicly available information on the Internet and Google directs to a public Web page, that is perfectly legitimate,” Ortiz wrote in an e-mail to The Times. “That is no hacking or spying or anything similar.”

Interesting line of reasoning. Of course, the real problem is not that someone accessed the public information but that no acknowledgement of this access was made in Professor Ortiz and coworkers’ communications. One thought I had (yeah, sometimes I have those things!) was whether the fact that this information was publicly accessible means that it was “published” in the same way that preprints are “published” or information submitted to a database is “published.” Well, probably not, but it seems we could construct an instance which comes even closer to this “published” line. Consider that a challenge 😉

Math Doh!

A biophysical chemist sends me the following link from the San Francisco Chronicle, “Cal math lecture to feature binge-eating cartoon dad,” detailing a math program this Sunday at 2 p.m. at the MSRI in Berkeley on the Simpsons and math (I lived almost level with the MSRI in the Berkeley hills while I was a graduate student. Oh, what a view!) Sounds like fun.
What is really funny, however, is what the article offers as an example of math and the Simpsons being used together:

Homer, in a dream, wrote that 1,782 to the 12th power plus 1,841 to the 12th power equals 1,922 to the 12th power. (It does.)

First of all, it wasn’t a dream. Homer had slipped into… the third dimension (which we define as that place where frinkahedrons exist, of course). And second, if what the author states were true, then Princeton mathematician Andrew Wiles would have quite a bit of egg on his face, because the above statement would be a counterexample to Fermat’s Last Theorem (not to be confused with the other important FLT: Fermat’s Little Theorem), which Andrew Wiles famously proved (well, proved, and then they found an error, and then he fixed the error. Genius!) Fermat’s Last Theorem states that there are no positive integers x, y, and z such that x^n + y^n = z^n for n an integer greater than two. What is funny is that if you evaluate the two sides of this equation, they agree in their first nine most significant digits:

1782^12+1841^12=25412102586…
1922^12=25412102593…

So if you type this equation into a calculator which only keeps ten digits of precision, it will fail to spot the difference (rounding that last digit to the same number, I think. For the actual program used to find this violation see here.) So it seems as if this joke, a “calculator significant digit” violation of Fermat’s Last Theorem, has caught its first victim. I’ll bet the journalist involved did exactly that: he just plugged it into his handy calculator! Well, maybe not, but still it’s pretty funny to think that this might have occurred.
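With exact integer arithmetic (Python’s bignums, say) the near-miss is easy to see; the helper below is just my sketch of what a ten-significant-digit calculator would do:

```python
lhs = 1782**12 + 1841**12
rhs = 1922**12

# Fermat's Last Theorem survives: the two sides differ...
assert lhs != rhs

# ...but they agree in the first nine significant digits:
print(str(lhs)[:11])  # 25412102586...
print(str(rhs)[:11])  # 25412102593...

def ten_digit_calculator(n):
    # Round a positive integer to ten significant digits,
    # mimicking a calculator with limited precision.
    s = str(n)
    return round(int(s[:11]) / 10) * 10 ** (len(s) - 11)

# The ten-digit calculator can't tell them apart:
print(ten_digit_calculator(lhs) == ten_digit_calculator(rhs))  # True
```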

Math and Science Erosion

The New York Times has an article today entitled “Top Advisory Panel Warns of an Erosion of the U.S. Competitive Edge in Science” discussing a report issued by the National Academies concerning the scientific competitiveness of the United States. Here is an interesting fact:

Last year, more than 600,000 engineers graduated from institutions of higher education in China, compared to 350,000 in India and 70,000 in the United States.

Funny, when I read this, the first thought which comes to my mind is that the competitiveness for engineering jobs in China must be huge!
I always have a big mixed bag of emotions when I read articles like this. On the one hand, like most scientists, I tend to think that science and research are underfunded. Funding as a percent of GDP is about half what it was in the 60s. On the other hand, I tend to see the increase in funding by other countries in a positive light: that other governments are realizing they need to spend more on science and research is good for the researchers in those countries and also good for the world (of course, global inequities mean this good is diluted as a function of distance down the first-to-second-to-third-world ladder.)
What has certainly been true over the last fifty years is that the U.S. has built up an incredible system of higher education (seventy percent of Nobel prize winners work in U.S. universities, as one silly example. We spend about twice as much as western Europe on higher education per student, as another example.) But do I begrudge the rest of the world similar top universities? That doesn’t seem right. On the other hand, when I see destructive factors at work in the U.S. university system (as, for example, is occurring because of perceived (and actual 🙁 ) hostility towards foreign graduate students), this doesn’t make me happy.
So sometimes it’s hard to keep up a gloomy face: behind all of the rhetoric, I see the world progressing at an increasing rate, which, I believe, is a good thing. I guess I just wouldn’t be good at producing a report like this one, because I’d focus almost exclusively on the negatives of the U.S. system and little on the positives of the other nations’ progress (except as an example.)

"Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics" by David Kaiser

Some of you have accused me of Feynman hero worship. To which I plead guilty, but with exceptions! I certainly admire the guy for certain qualities, but like everyone, he was human and so comes along with all the lovely faults and trivialities that make up our entertaining species. But on to the subject at hand: I just finished reading Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics by David Kaiser.
This book was once a thesis. Parts of it read like it was once a thesis. On the other hand, I really like reading theses. But still, there are times which I wish a little more editing had been done to lead to a more narrative tone.
That being said, I mostly recommend this book to the hard-core aficionado of the early history of quantum field theory. But if you are such a rare beast (I suspect most physicists aren’t!), this book is very entertaining. The most interesting component of the first half of this book involves the “split” between Feynman and Dyson in their take on the diagrams (interestingly, early on, the diagrams were often referred to as Feynman-Dyson diagrams) and how this difference could be traced through the postdocs and graduate students who learned the techniques from either Feynman or Dyson. It is interesting how the rigor of Dyson and the physical intuition of Feynman could be explicitly seen in how they drew the diagrams. Dyson would draw the diagrams always with right angles, clearly indicating that they were simply a tool for bookkeeping the perturbation theory. Feynman’s diagrams, on the other hand, had tilted lines, much more suggestive of the path integral formulation of quantum theory which Feynman had in mind in coming up with the rules for the diagrams.
The second half of the book is dedicated to a study of Geoffrey Chew and his idea of nuclear democracy. I certainly wish that this part of the book had more details, as this story is fascinating, but on the whole the book gives a nice introduction to the S-matrix dispersion tools and the basic ideas of the bootstrap, and looks at how diagrammatic methods played a role in this work (no longer really Feynman diagrams.) Interestingly, I learned that Chew was probably the first professor to resign in protest over the University of California’s requirement of an anti-communist oath. Good for Chew.

Letters in the Sky with Dialogue

Steve Hsu is at it again with an interesting paper, this time with Anthony Zee (UCSB). And this one has to be read to be believed: physics/0510102:

Message in the Sky

Authors: S. Hsu, A. Zee
Comments: 3 pages, revtex
Subj-class: Popular Physics
We argue that the cosmic microwave background (CMB) provides a stupendous opportunity for the Creator of our universe (assuming one exists) to have sent a message to its occupants, using known physics. The medium for the message is unique. We elaborate on this observation, noting that it requires only careful adjustment of the fundamental Lagrangian, but no direct intervention in the subsequent evolution of the universe.

I especially like the last paragraph:

In conclusion, we believe that we have raised an intriguing possibility: a universal message might be encoded in the cosmic background. When more accurate CMB data becomes available, we urge that it be analyzed carefully for possible patterns. This may be even more fun than SETI.

The Power of Really Really Big Computers

An interesting question at the talk Sunday was: suppose you build a quantum computer, how do you really know that these crazy quantum laws are governing the computer? What I love about this question is that you can turn it around and ask the same question about classical computers! At heart we believe that if we put together a bunch of transistors to form a large computation, each of these components behaves itself and nothing new goes on as the computer gets larger and larger. But for quantum computers there is a form of skepticism which says that once your computer is large enough, the description given by quantum theory will fail in some way (see, for example, Scott Aaronson’s “Are Quantum States Exponentially Long Vectors?”, quant-ph/0507242, for a nice discussion along these lines.) But wait, why shouldn’t we ask whether classical evolution continues to hold for a computer as it gets bigger and bigger? Of course, we have never seen such a phenomenon, as far as I know. But if I were really crazy, I would claim that the computer just isn’t big enough yet. And if I were even crazier, I would suggest that once a classical computer gets to a certain size its method of computation changes drastically and allows for a totally different notion of computational complexity. And if I were flipping mad, I would suggest that we do have an example of such a system, and that this system is our brain. Luckily I’m only crazy, not flipping mad, but still this line of reasoning is fun to pursue.

Talking in L.A. Talking in L.A. Nobody Talks in L.A.

Yesterday I gave a talk on quantum computing in Los Angeles at math-club. What is math-club? From their website (“be there or be square”):

People who like books and stories have movies, libraries, and even book clubs to amuse them. Those interested in math and logic have amazon.com and the occasional brain teaser at the back of in-flight magazines.
MATH-CLUB was formed to engage a group of LA’s math-inclined people in analytical discussions under an easy going atmosphere.
The club got rolling under suitably informal circumstances — its founder, Roni Brunn, brought up the idea for a math club at a bar and was lucky enough to do so within ear shot of a friend, Matt Warburton, who liked the concept and thought of others who would as well. Soon after that, we had our first math club meeting.
Stewart Burns lectured on a Sunday afternoon in September, 2002, to a crowd that included David X. Cohen, Kirsten Roeters, April Pesa, Ken Keeler, Daisy Gardner, Sean Gunn, James Eagan, and, of course, Matt, using most horizontal surfaces at Roni’s apartment as seating.
Because many of those most actively involved with MATH-CLUB do have writing credits at shows like “The Simpsons” and “Futurama,” some assume the club is only the province of professional Hollywood writers. In fact, the club as a whole is a diverse group of people who punch the clock as professors, computer programmers, musicians, actors, designers, journalists, and documentarians.
Similarly, people come to meetings with a wide range of math backgrounds. Some of the members have advanced degrees in math and have been published; some are recreational users of math. People do ask questions and explore the topic as a group. We kick off meetings with a cocktail party.

Who could resist talking to such a club? Certainly not me. Especially when I received an email from the organizer, Roni Brunn, which had as its subject “math and simpsons.” Such subject lines stimulate approximately 90 percent of my brain. If you include the word “ski” as well, then the extra 10 percent is activated and I go into epileptic seizures. Not a pretty sight.
I am especially fond of the math club analogy with book clubs (which were described to me by one of the people present last night as “mom’s drinking night.” Crap, now I’m going to get an email from my mother.) Why aren’t there more math clubs in the mold of book clubs: where small groups get together to hear about subjects which stretch their critical brains? I certainly think it’s a grand idea and am tempted to start a math club myself (but in Seattle we replace writers with programmers?)
When I was at Caltech I would often attend the talks in fields outside of physics and computer science. I called it “my television.” Certainly hearing a biology or geology talk and trying to discern what the heck they were talking about made me feel like a total moron. But whenever you hear a good talk outside of your field, you get a little bit of the feeling for what is going on, and this feels great. I personally wish that more graduate students would do this, in order to help combat the graduate-student fatigue which comes from being far too narrowly focused and not remembering why it is that all of science is interesting.
Anyway, it was a fun talk, and hopefully I didn’t punish people too much. When I was in Italy recently, I noticed that when the students were not understanding the English I was using, I would try to fix this by reverting to idioms, which are simpler, of course, but totally incomprehensible to the Italian students. I noticed last night that I sometimes do a similar thing when I give a talk: when I’m talking about something and I think people are lost, I oscillate back to a language which is far simpler but whose content doesn’t really help convey my original meaning. What I need to learn to do is to ramp down to medium levels while keeping the content. Ah well, much to learn about this “giving talks” thing.

Ig Nobel Prizes 2005

The 2005 Ig Nobel Prizes are out! The physics prize went to the University of Queensland:

PHYSICS: John Mainstone and the late Thomas Parnell of the University of Queensland, Australia, for patiently conducting an experiment that began in the year 1927 — in which a glob of congealed black tar has been slowly, slowly dripping through a funnel, at a rate of approximately one drop every nine years.

I’ve actually seen this glob of black tar. Little did I know I was looking at an experiment of Nobel proportions! Makes me wish I’d taken a picture of it.