Genius Grants

The list of this year’s MacArthur award winners is out. The MacArthur awards are half-million dollar, no-strings-attached awards for “creativity, originality, and potential.” I will brag that Berkeley ended up with three awards this year. OK, that’s enough bragging. At times like this, I think it is important to revisit this classic article from the Onion:

MacArthur Genius Grant Goes Right Up Recipient’s Nose
October 15, 2003 | Issue 39•40
ALBANY, NY—According to friends, the $500,000, five-year, no-strings-attached MacArthur Fellowship awarded to Jim Yong Kim earlier this month went right up the 43-year-old scientist’s nose. “Kim’s efforts to eradicate drug-resistant strains of tuberculosis in Russian prisons and Peruvian ghettos amazed everyone—as did his appetite for top-grade cocaine,” Marisa Amir said Monday. “As soon as that first check arrived, Kim was on the phone with his dealer, and two hours later, he was in a hot tub full of strippers.” His first installment of money gone, the scientist then returned to the task of developing a whole-cell cholera toxin recombinant B subunit vaccine.

If it really is unrestricted, I wonder if you can take the money and put it all on the number 13 in roulette?

Post Quantum Cryptography

In the comments to Who Will Program the Quantum Computers?, Hal points to PQCrypto 2006, a conference on post-quantum cryptography. From their webpage:

PQCrypto 2006: International Workshop on Post-Quantum Cryptography
Will large quantum computers be built? If so, what will they do to the cryptographic landscape?
Anyone who can build a large quantum computer can break today’s most popular public-key cryptosystems: e.g., RSA, DSA, and ECDSA. But there are several other cryptosystems that are conjectured to resist quantum computers: e.g., the Diffie-Lamport-Merkle signature system, the NTRU encryption system, the McEliece encryption system, and the HFE signature system. Exactly which of these systems are secure? How efficient are they, in theory and in practice?
PQCrypto 2006, the International Workshop on Post-Quantum Cryptography, will look ahead to a possible future of quantum computers, and will begin preparing the cryptographic world for that future.

Prepare for us, world: we quantum computers are coming (well, maybe not so fast 😉 )

Life Around Black Holes

I just started reading A Fire Upon the Deep by Vernor Vinge. Interestingly, in Vinge’s universe the laws of physics are stratified: in the galaxy proper the speed of light is finite, but this changes as one moves away from the galaxy. The farther one gets from the galaxy, the more amazing the technology one can build and operate.
Which got me thinking about Scott Aaronson’s paper NP-complete Problems and Physical Reality. In this paper, Scott discusses an issue you will hear about over many coffee breaks at quantum computing conferences: can one use relativity to create exponential speedups in computational power? One way you might think of doing this involves time dilation. Set your computer up to calculate some hard problem. Then board a spaceship and head off for a leisurely trip around the galaxy at speeds nearing the speed of light. When you return to your computer, via the twin paradox the computer will be much older than you, and will, hopefully, have solved the problem. Roughly, if your speed is [tex]$\beta=v/c$[/tex], then you can get a speedup for your computation which is proportional to [tex]$(1-\beta^2)^{-\frac{1}{2}}$[/tex]. The problem with this scheme, it appears, is that in order for it to work you need to get your [tex]$\beta$[/tex] exponentially close to one, and this requires an exponential amount of energy. So it doesn’t seem that such a scheme would work. Another proposal, along similar lines, is to set up your computer and then travel very close to a black hole (not recommended: only trained professionals should attempt this). Then, due to gravitational time dilation, you can mimic the twin experiment and return to a (hopefully) solved problem. However, again, it seems that getting yourself back out of the gravitational well requires an amount of energy which destroys the effect.
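A quick back-of-the-envelope sketch (my own numbers, not from Scott’s paper) makes the energy problem concrete: to get a dilation factor [tex]$\gamma$[/tex] you need [tex]$\beta=\sqrt{1-1/\gamma^2}$[/tex], and the kinetic energy per unit rest mass is [tex]$(\gamma-1)c^2$[/tex], so the energy grows linearly with the speedup, and an exponential speedup costs exponential energy.

```python
import math

def beta_for_speedup(gamma):
    """Speed (as a fraction of c) needed for time-dilation factor gamma."""
    return math.sqrt(1.0 - 1.0 / gamma**2)

def kinetic_energy_per_rest_mass(gamma):
    """Relativistic kinetic energy in units of m*c^2: (gamma - 1)."""
    return gamma - 1.0

# How close to c must the ship fly, and at what energy cost?
for gamma in [10.0, 1e3, 1e6, 1e9]:
    beta = beta_for_speedup(gamma)
    print(f"speedup {gamma:10.0e}: 1 - beta = {1 - beta:.3e}, "
          f"KE/mc^2 = {kinetic_energy_per_rest_mass(gamma):.3e}")
```

Note how 1 − β shrinks quadratically in the speedup while the energy bill grows in lockstep with it: a 10⁹-fold speedup needs a ship within about 5 × 10⁻¹⁹ of the speed of light and roughly 10⁹ rest-mass energies of fuel.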
But what Vinge’s novel got me thinking about was the following: there appears to be a computational advantage to being away from large masses. Assume that there is some form of life surrounding a black hole (OK, big assumption, but hey!) Then it seems to me that this computational advantage might contribute to a competitive advantage, in the evolutionary sense, for an intelligent species. Thus we might expect that a civilization for which gravitational time dilation is a real effect will live in a world, much like Vinge’s world, where the less intelligent animals live close to the mass and the more intelligent, more magic-wielding creatures live farther away (“Any sufficiently advanced technology is indistinguishable from magic”-Arthur C. Clarke.) Following Vinge’s novel, one might even speculate about the computational advantage of being outside the galaxy. The time dilation effect there is about one part in a million (as opposed to one part in a billion for the effect of being at the surface of the earth versus “at infinity.”) Unfortunately this seems like too small a number to justify any such effect.
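The orders of magnitude quoted above are easy to sanity-check (my own rough estimates, with standard constants): in the weak-field limit the fractional time dilation at radius [tex]$r$[/tex] from mass [tex]$M$[/tex] is about [tex]$GM/(rc^2)$[/tex], and for the galaxy one can estimate the depth of the potential well from the Sun’s circular orbital speed, roughly 220 km/s, via [tex]$v^2/c^2$[/tex].

```python
# Weak-field gravitational time dilation: d(tau)/dt ~ 1 - GM/(r c^2).
# Rough order-of-magnitude checks; the input numbers are my own estimates.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

M_earth, R_earth = 5.972e24, 6.371e6      # kg, m
earth_surface = G * M_earth / (R_earth * c**2)

# Galactic potential well at the Sun's location, estimated from the
# circular orbital speed v ~ 220 km/s: |phi|/c^2 ~ v^2/c^2.
v_gal = 2.2e5       # m/s
galaxy = (v_gal / c)**2

print(f"Earth's surface vs infinity: {earth_surface:.1e}")  # ~7e-10
print(f"Inside vs outside the galaxy: {galaxy:.1e}")        # ~5e-7
```

So the Earth number really is about a part in a billion, and the galactic number about a part in a couple of million, consistent with the figures above.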
OK, enough wild speculation for a Tuesday.

Best Title Ever Submission: Pancakes

Steve submits the following for best paper title ever, astro-ph/0509039: “Local Pancake Defeats Axis of Evil” by Chris Vale. I wonder if Chris has informed North Korea, Iraq, and Iran that they have been defeated by a local pancake (is it better to be defeated by a nonlocal pancake?)
Actually this paper is about a very interesting subject. One can learn all sorts of interesting things about the cosmological history of our universe from the angular spectrum of the cosmic background radiation. I remember as a graduate student, when I thought I might go into astrophysics, taking a class in which we calculated this spectrum for all sorts of different cosmological models. The bumps in the spectrum across different spherical harmonics were distinctly different for many different models. Well now, thanks to experiments like WMAP, we have exceedingly good information about this spectrum (the music of the universe, in poetic language.) This allows us to very nicely rule out all sorts of models of the early history of our universe. Interestingly, however, there is a possible unexplained feature in the spectrum: the l=2 and l=3 components appear to be correlated. One explanation of this effect is simply that it is a statistical fluke; the anomalous alignment itself has been dubbed the “axis of evil”! The “local pancake” in the title of the paper refers to the author’s theory about this l=2, l=3 anomaly: he postulates that it is an effect of gravitational lensing due to the structure of mass in our local neighborhood (bet you didn’t know we lived in a pancake, did you?) This lensing, the author claims, has a consequence for the l=1 (dipole) term. But why does this change the l=2 and l=3 components? Because the l=1 dipole term is usually subtracted out from the data, since it has large components due to our proper motion with respect to the cosmic background radiation. The author claims that the lensing effect causes this l=1 dipole term to be subtracted incorrectly. Any good astrophysicists out there slumming on a quantum blog care to comment?

Delta X Delta P

The science blogosphere is abuzz about Lisa Randall’s op-ed article in the New York Times. See comments at Hogg’s Universe, Not Even Wrong, Lubos Motl’s Reference Frame, and Cosmic Variance. The article just made me happy: read the following paragraph

“The uncertainty principle” is another frequently abused term. It is sometimes interpreted as a limitation on observers and their ability to make measurements. But it is not about intrinsic limitations on any one particular measurement; it is about the inability to precisely measure particular pairs of quantities simultaneously. The first interpretation is perhaps more engaging from a philosophical or political perspective. It’s just not what the science is about.

There is nothing that makes my Monday mornings brighter than a correct popular explanation of the uncertainty principle.
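As a concrete illustration of the correct reading (a toy numerical check of my own, with [tex]$\hbar=1$[/tex]): for a Gaussian wavepacket the product of the position and momentum spreads saturates the bound [tex]$\Delta x \, \Delta p \geq \hbar/2$[/tex] exactly, which one can verify on a grid by computing [tex]$\Delta x$[/tex] directly and [tex]$\Delta p$[/tex] from the Fourier transform.

```python
import numpy as np

hbar = 1.0
sigma = 0.7                          # width of |psi|^2 in position space
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]

# Gaussian wavepacket, normalized on the grid; <x> = 0 by symmetry.
psi = np.exp(-x**2 / (4 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
delta_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# Momentum-space amplitudes via FFT (overall phases don't matter here).
p = hbar * 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
dp = abs(p[1] - p[0])
phi = np.fft.fft(psi)
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dp)
delta_p = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * dp)

print(delta_x * delta_p)             # ~0.5, i.e. hbar/2: bound saturated
```

The point being exactly Randall’s: the bound constrains the pair of spreads jointly, while either [tex]$\Delta x$[/tex] or [tex]$\Delta p$[/tex] alone can be made as small as you like by changing sigma.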

Optimistic Deutsch

David Deutsch thinks quantum computing is just around the corner. He has posted the following on his blog:

For a long time my standard answer to the question ‘how long will it be before the first universal quantum computer is built?’ was ‘several decades at least’. In fact, I have been saying this for almost exactly two decades … and now I am pleased to report that recent theoretical advances have caused me to conclude that we are within sight of that goal. It may well be achieved within the next decade.
The main discovery that has made the difference is cluster quantum computation, which is a marvellous new way of structuring quantum computations which makes them far easier to implement physically.

H-index Me

Many of you have probably already seen this. Jorge Hirsch, a physicist from UCSD, has proposed an interesting way to measure the research impact of an author. For details, see this Nature article or Hirsch’s original article physics/0508025. The basic idea of Hirsch’s h-index is very simple: the index is the largest number h such that the author has written h papers each with at least h citations. Thus, for instance, if an author had written five papers with 10, 6, 4, 2, and 1 citations, then the h-index would be three: the third most cited paper has four citations, which is at least three, but the fourth most cited paper has only two citations, which is less than four. The highest h-index among physicists, Hirsch claims, belongs to Ed Witten, who has an h-index of 110. This means he has written 110 papers each with at least 110 citations. Wow! Another interesting quantity Hirsch defines is the average rate at which the h-index has grown per year over a career: a person’s current h-index divided by the time since they first started publishing. Witten has an astounding average increase in h-index of 3.9 per year over his career. What this all means is very much open to debate, but heck, it’s kind of fun!
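The definition is simple enough to turn directly into code (a quick sketch of my own, not anything from Hirsch’s paper):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    # Walk down the sorted list while the (h+1)-th paper still has
    # at least h+1 citations.
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 6, 4, 2, 1]))   # 3, matching the example above
```

Run on the five-paper example above it gives three, and a hypothetical Witten with 110 papers of 111 citations each would indeed score 110.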
One thing which is nice about the h-index is that it is very simple to calculate using the ISI Web of Science citation tools or, more dangerously, from citebase. My h-index from citebase (access to Web of Science is painful from my current computer location) is 12. The funny thing is that Hirsch says that this is about the h-index (with large error bars) at which one should get tenure. Haha, very funny!

Quantum Dot Qubits

I had seen these results in a previous talk, but now the paper is out in Science. Charles Marcus’s group at Harvard has been working on building a qubit from a semiconductor quantum dot. One of the difficulties for this type of qubit is that it couples via a hyperfine interaction to the surrounding nuclei. If you don’t do anything about this hyperfine interaction, a typical Rabi flopping experiment exhibits decoherence times of around 10 nanoseconds. Way too short to make a useful quantum computer! But what is nice about the hyperfine decoherence mechanism is that it is effectively a static coupling, which can be overcome by doing a spin echo experiment. In the above paper, Marcus and colleagues show that with appropriate spin echo techniques they get coherence times of about 1 microsecond. Nothing like a couple orders of magnitude of improvement, no?
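To see why a static coupling is echo-correctable, here is a toy dephasing model (my own sketch, not the experiment): each spin in an ensemble picks up phase from a random but constant detuning. A plain Ramsey-style free evolution decays as the ensemble dephases, while inserting a π pulse at the midpoint flips the sign of the accumulated phase, so for a truly static detuning the second half cancels the first exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Static random frequency shifts, e.g. from a frozen nuclear hyperfine field.
detunings = rng.normal(0.0, 1.0, n)

def ramsey_signal(t):
    """Ensemble coherence after free evolution for time t (no echo)."""
    return np.abs(np.mean(np.exp(1j * detunings * t)))

def echo_signal(t):
    """Hahn echo: a pi pulse at t/2 negates the phase already accumulated,
    so a static detuning contributes +w*t/2 then -w*t/2, cancelling exactly."""
    phase = detunings * (t / 2) - detunings * (t / 2)
    return np.abs(np.mean(np.exp(1j * phase)))

for t in [0.5, 2.0, 5.0]:
    print(f"t={t}: ramsey={ramsey_signal(t):.3f}, echo={echo_signal(t):.3f}")
```

In the real experiment the nuclear field is only quasi-static, which is why the echo buys a couple of orders of magnitude rather than infinite coherence.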

My Fermion is a Boson

Recently I have been reading Quantum Field Theory of Many-Body Systems: From the Origin of Sound to an Origin of Light and Electrons by Xiao-Gang Wen. The first half of this book is a very well written introduction to quantum field theory in many-body systems. But what is really interesting is the second half, where Wen describes some of his and others’ research on interesting many-body quantum spin systems. One point which Wen is particularly excited about is that fermions can appear as quasiparticles in local bosonic lattice systems.
The place where I first learned about this sort of thing was some of the work I did in my thesis, where I used the Jordan-Wigner transformation in one dimension (a good read: Michael Nielsen’s notes on the Jordan-Wigner transform.) Suppose you have a one dimensional lattice of fermions, where the fermions interact only between nearest neighbors. Let [tex]$a_i$[/tex] and [tex]$a_i^\dagger$[/tex] be the annihilation and creation operators at site [tex]$i$[/tex]. These being fermions, the operators satisfy the anticommutation relations [tex]$\{a_i,a_j^\dagger\}=\delta_{i,j}$[/tex] and [tex]$\{a_i,a_j\}=0$[/tex]. In the Jordan-Wigner transform, we replace each fermion site by a qubit and perform the map [tex]$a_i \rightarrow -\left(\prod_{j=1}^{i-1} Z_j\right) \frac{1}{2}(X_i + i Y_i)$[/tex]. One can easily check that this mapping preserves the fermion anticommutation relations. Under this mapping, we can map our nearest neighbor fermion model to a nearest neighbor qubit model. It is exactly this kind of mapping, for more interesting systems, that Wen is excited about.
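The “easily check” step can be done by brute force on a few sites (a small numerical sketch of my own; the overall minus sign in the map above does not affect the anticommutators, so it is dropped here):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(i, n):
    """Jordan-Wigner a_i on n qubits: Z string on sites j < i, then
    (X_i + i Y_i)/2, identity elsewhere (sign convention dropped)."""
    ops = [Z] * i + [(X + 1j * Y) / 2] + [I2] * (n - i - 1)
    return kron_all(ops)

def anticomm(A, B):
    return A @ B + B @ A

n = 3
a = [jw_annihilation(i, n) for i in range(n)]

# Check {a_i, a_j^dagger} = delta_{ij} I and {a_i, a_j} = 0.
for i in range(n):
    for j in range(n):
        target = np.eye(2**n) if i == j else np.zeros((2**n, 2**n))
        assert np.allclose(anticomm(a[i], a[j].conj().T), target)
        assert np.allclose(anticomm(a[i], a[j]), 0)
print("anticommutation relations verified on", n, "sites")
```

The Z string is what makes operators on different sites anticommute rather than commute; it is also the culprit that ruins locality in higher dimensions, which is the problem discussed next.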
An interesting question to ask is how to perform the above mapping for lattices of dimension higher than one. To this end, you will notice that the mapping used above has a linear ordering and hence is not well adapted to such a task. In particular, if you try to use the mapping in this manner, you will end up creating qubit Hamiltonians with very nonlocal interactions. In fact, many have tried to create higher dimensional Jordan-Wigner transforms, but in general there were always limitations to these attempts. To this end, the recent paper cond-mat/0508353 by F. Verstraete and J.I. Cirac is very exciting. These two authors show that it is possible to convert any local fermion model into a local model with qubits (or qudits), i.e., they effectively solve the problem of creating a Jordan-Wigner transform on higher dimensional lattices.
One of the points that Wen likes to raise in light of this work is the question of whether fermions are actually fundamental. From what I understand, while there are examples of fermions arising in these local interacting boson models, it is not known how to do this for chiral fermions. Strangely, I’ve always been more enamored with fermions than with bosons (holy cow, am I a geek for writing that sentence.) But perhaps my love of bosons will have to start growing (oh, that’s even worse!)