Theory Like Software

One interesting issue in quantum information science is the lack of hiring of theory people by physics departments in the United States over the last few years (by my count, four hires in the last six years.) I think one of the main reasons this may be true is that physics departments are still skeptical of the field of quantum computing. For experimentalists, the hiring situation has been better (although, of course, no piece of cake by any measure!) The reason for this, it seems to me, is that experimental people in quantum information science are doing interesting experiments which push the boundaries of experimental physics. It is easier to justify hiring an experimentalist because there is some belief that they will be able to adjust if quantum information begins to fizzle out (I don’t believe it, but apparently a large number of physicists do.) In essence, physics departments feel that experimentalists are a better hedge than theorists.
But let’s take a look at this issue from the perspective of the next few years. As quantum computers grow from four and five qubit computers to ten to a hundred qubit computers, the experimentalists will be in some ways less tied to fundamental physics and more tied to the engineering and technology of the devices they will be building. And there is another important factor: not all implementations of quantum computers are going to pay off. Thus while hiring an experimentalist who is working on the implementation which really takes off can be a jackpot for a department, hiring one who is working on the implementation which fizzles can leave the department in exactly the position they are supposedly avoiding by not hiring theorists.
Now look at this from the perspective of hiring a theorist. Quantum information theorists are much more immune to which implementation really takes off. Sure, some theorists are more tied to a particular implementation, but, on the other hand, the bulk of theory is done in a way which is independent of any quantum computing platform. Thus quantum information theorists, like those who work on the computer science of software, are in many ways a more robust hire in the long run than an experimentalist.
Of course, this argument is only a small part of the big picture (e.g., what happens if the field is a fad? what if you do believe you can pick out the best implementation? what if you only care about hiring someone who will have an impact in the next five to ten years?) but it’s certainly an argument which I wish more physics departments would listen to.

"Physics"

For a while I’ve joked that at the rate hard drive capacities are increasing, it will soon be possible that instead of having an MP3 player with all your favorite songs on it, you will simply have a device with “music.” All of music.
Now I learn, via Michael Nielsen’s blog, that Joanna Karczmarek is starting a project to put the entire arXiv.org into BitTorrent files. Currently she is offering the hep-th section from all of 2004 via a torrent. So, I guess, coming soon to a laptop near you: “physics.” I wonder if this will be the impetus for me to get a new laptop with a monstrous hard drive.

Fifty Dimensional Computers

Sometimes there are conflicts that run deep inside of me because of my original training as a physicist. One thing I have never understood in classical computer science is why the locality and three dimensionality of our world don’t come into play in the theory of computational complexity. I mean sure, I can understand how you would like to divorce the study of the complexity of algorithms from the underlying medium. Sure, I can also understand that those who study architectures spend copious amounts of time dealing with exactly how resources scale due to the constraints of locality and dimensionality. But isn’t it true that a true theory of the complexity of information processing should, at its most fundamental level, make reference to the dimensionality and connectivity of the space in which the computer is built? Perhaps the complexity of cellular automata gets some way towards this goal, but somehow it doesn’t feel like it goes all of the way. Most importantly, the usual conditions of uniformity of the cellular automata seem to me to be overly restrictive for a theory of computational complexity which doesn’t ignore issues of dimensionality and locality.
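To make the locality constraint concrete, here is a toy sketch (my own illustration, not any standard construction) of a two-dimensional nearest-neighbor cellular automaton: each cell’s next state depends only on its four neighbors, so the dimensionality and connectivity of the underlying space are explicit in the model.

```python
import numpy as np

def step(grid):
    """One synchronous update of a toy 2D nearest-neighbor cellular automaton.
    Each cell's next state depends only on its four von Neumann neighbors,
    so information can spread at most one lattice site per time step."""
    neighbors = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
                 np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
    return neighbors % 2  # parity rule: on iff an odd number of neighbors are on

grid = np.zeros((33, 33), dtype=int)
grid[16, 16] = 1               # a single seed cell
for t in range(10):
    grid = step(grid)

# After t steps the seed has influenced only sites within distance t of it;
# in d dimensions that's O(t^d) sites, which is where dimensionality and
# locality enter any honest accounting of the resources used.
print(grid.sum())
```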
Another interesting spinoff of this line of reasoning is to think about how computation changes when the topology of spacetime is nontrivial, or even when that topology is itself something which can change with time. What is the power of a computer which can change the very notion of the topology of the underlying circuit connectivity?

SQuInT

For those of you who don’t know, SQuInT stands for “Southwest Quantum Information and Technology” and has been holding a conference in the Southwest United States for seven years (and even longer if you count some “pre”-SQuInTs.) SQuInT is becoming more and more distinctive in that it is one of the rare conferences that tries to bring together the different parts of physics which are all involved in trying to build a quantum computer. They even have a few talks by silly theorists like me. My talk wasn’t as good as I had hoped. Thirty minutes is pretty darn stringent.
The highlight of the conference, besides the night spent watching the old couples work the dance floor at the “exclusive” resort in Tucson where the conference was held, was hearing about the work of Robert Schoelkopf from Yale on combining cavity quantum electrodynamics with superconducting qubits. Traditional cavity QED is done with cavities coupling to neutral atoms (in either a microwave or optical regime.) Some of the earliest quantum computing implementations were performed in cavity QED by Jeff Kimble’s lab at Caltech. What Robert talked about was using a cavity to couple to a hybrid superconducting qubit. He showed some really nice results demonstrating vacuum Rabi oscillations from the coupling of the cavity to his superconducting qubit. An amazing aspect of this system is that the effective dipole moment of the superconducting qubit is about ten thousand times stronger than in neutral atoms. Why is this important for quantum computing? It’s probably most important because one of the most difficult tasks for many solid state quantum computing systems is performing readout of the state of the qubit with high reliability and without destroying the system. Robert’s scheme shows a reasonable chance of performing such a task. For those of you who wish to bet with me on what the final quantum computer will look like, the SQuInT conference and Robert’s results in particular have made me recalculate my odds. Please send me an email if you would like to place a bet ; )
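For reference, here is the textbook cavity-QED scaling (my own gloss, not a result from the talk) that shows why a larger dipole moment buys you so much: the qubit-cavity coupling is linear in the dipole moment.

```latex
% Jaynes--Cummings coupling of a two-level system to a single cavity mode
% (standard cavity QED; symbols: mode frequency \omega, mode volume V, dipole d).
\begin{align}
  E_{\mathrm{vac}} &= \sqrt{\frac{\hbar\omega}{2\epsilon_0 V}}
    && \text{vacuum field amplitude of the cavity mode,} \\
  \hbar g &= d\, E_{\mathrm{vac}}
    && \text{qubit--cavity coupling, linear in the dipole moment } d, \\
  \Omega_{\mathrm{vac}} &= 2g
    && \text{vacuum Rabi frequency seen in the oscillations.}
\end{align}
```

So, all else being equal, a dipole moment ten thousand times larger means a coupling (and hence a vacuum Rabi splitting) ten thousand times larger, which is a big part of why strong coupling is so accessible in these circuits.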

Arxiv Data

In fairness, here are all of the different categories from ArXiv.org:
[Figure: ArXiv Papers, by category]
One can certainly see the effect of different fields slowly adopting the arXiv, but it seems that even by around 1996 its use was becoming pretty ubiquitous. Here are the percentages of papers in each category from 1997 to 2004:
[Figure: Portions for the ArXiv]
An interesting trend I was not aware of until plotting this was the rise of the physics category. I’m guessing that this is due to the rise of biophysics. Yes, people, the future is biology! Ack, and the future is bleak for me if I don’t get back to PowerPoint!
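In case anyone wants to redo these plots, the computation is nothing fancier than per-year category shares. Here is a minimal sketch with made-up counts standing in for the real numbers (which come from the arXiv’s submission statistics):

```python
# Hypothetical yearly submission counts per arXiv category (illustrative only;
# the real numbers come from the arXiv's published submission statistics).
counts = {
    1997: {"hep-th": 3000, "quant-ph": 800, "cond-mat": 5000, "physics": 400},
    2004: {"hep-th": 3500, "quant-ph": 2500, "cond-mat": 9000, "physics": 2200},
}

def shares(year_counts):
    """Convert raw per-category counts into percentages of that year's total."""
    total = sum(year_counts.values())
    return {cat: 100.0 * n / total for cat, n in year_counts.items()}

for year in sorted(counts):
    pct = shares(counts[year])
    print(year, {cat: round(p, 1) for cat, p in pct.items()})
```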

The Rise of Quant-Ph

As a break from talk writing, I decided to take a look at the number of papers posted to the quant-ph archive versus the number of papers posted to the hep-th archive on arXiv.org. Soon we (quant-ph) will take over the world, no?
[Figure: The Rise of Quant-Ph]
Pretty astounding that quant-ph is now almost as active as hep-th.

Cosmic…Dude!

Fermilab and SLAC have come together to produce a free magazine called Symmetry. Last week I received my first issue. In this day when science writing is usually pretty dumbed down, Symmetry seems to have quite a few well-written articles.
I especially enjoyed the article on the Pierre Auger Observatory for cosmic rays located in Argentina. (Some of you may remember studying the Auger effect in physics, where a vacancy of an electron in an inner shell of an atom is filled not by radiating, but instead by ejecting energetic electrons from the outer shells.) Cosmic rays are very high energy particles which strike the Earth’s atmosphere and produce spectacular showers of billions of electrons, muons, and other particles. The great mystery, of course, is what produces these highly energetic particles. The first thing you might think is, well, just use the shower to locate where in the sky the cosmic rays come from. For low energy cosmic rays, this indeed has been done. But what you find is that they are pretty much randomly distributed across the sky. There’s a simple explanation for this: the galactic magnetic field is strong enough to significantly bend the direction of the charged particles which produce the cosmic ray showers. Thus you’re not getting a true indication of where the particle is coming from by tracking its direction when it strikes the Earth.
What’s nice about the Auger observatory is that it will be able to detect significant numbers of really high energy cosmic rays. At energies greater than about 10^19 electron volts, the galactic magnetic fields are not able to significantly bend the charged particles. And here is the really neat thing: nobody has any real good idea of what could produce cosmic rays with energies of 10^19 or 10^20 eV. (The world record for such a particle is 3×10^20 eV, 300 million times more energetic than our most powerful particle accelerator!) One constraint we do know is that these particles must come from somewhere in the local galactic neighborhood. This is because of the GZK cutoff: the cosmic background radiation looks pretty nasty to a particle with energy greater than 5×10^19 eV. Almost no particles above 5×10^19 eV can survive long at these energies, and within something like a few hundred million years almost all of them will be degraded down to the cutoff. Thus we know that these particles must be coming from our local neighborhood of galaxies (ruling out active quasars, everybody’s favorite explanation for all things energetic.)
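For the curious, here is my own back-of-envelope version of the GZK argument (not from the Symmetry article): a proton hitting a CMB photon head-on can photoproduce a pion once the center-of-mass energy crosses the pion production threshold.

```latex
% Threshold for photopion production, p + gamma_CMB -> p + pi, on a head-on
% CMB photon (natural units, c = 1).
\begin{align}
  s &\approx m_p^2 + 4 E_p E_\gamma \;\ge\; (m_p + m_\pi)^2 \\
  \Rightarrow\quad E_p &\gtrsim \frac{m_\pi \left( 2 m_p + m_\pi \right)}{4 E_\gamma}
      \approx \frac{(0.14\,\mathrm{GeV})(2.0\,\mathrm{GeV})}{4\,(6\times 10^{-4}\,\mathrm{eV})}
      \sim 10^{20}\,\mathrm{eV}.
\end{align}
```

Photons in the high-energy tail of the 2.7 K blackbody spectrum bring the effective cutoff down to roughly the 5×10^19 eV quoted above, and each pion-producing collision costs the proton a good fraction of its energy, which is why protons above the cutoff can’t travel cosmological distances.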
The cool thing is that the Auger observatory (which consists of 1600 detectors covering 1200 square miles!) will be able to really begin to pin down where these high energy cosmic rays are coming from. Which is particularly good, considering that there are nearly as many theories about cosmic rays as there are theorists who have studied them. When will we know more? Sometime around August the observatory will be reporting its first results. Exciting stuff!

Finding Ordinary Matter is No Ordinary Matter

Today, thanks to some very beautiful cosmology, we think we know quite a bit about the matter content of our universe. The observed universe is, according to these studies, 1 part ordinary matter, 5 parts dark matter, and 14 parts dark energy. One of the interesting gaps in our understanding of this picture, however, is that when we add up all the numbers, we find that we are missing between 30 and 40 percent of the ordinary matter. One possibility for where this matter may be found is in hot (10^6 K), low density gas in the intergalactic medium. At these high temperatures, atoms like oxygen and nitrogen still retain a few bound electrons, and because these are heavy elements with only a few bound electrons, they will absorb only at very high energies. In order to see this absorption, you need to look in the ultraviolet or X-ray regime of the spectrum. Since it’s impossible to test this theory from ground-based instruments, the idea has floated around but never really been verified.
Now there is news today, published in Nature by Nicastro et al. (vol. 433, p. 493), that the Chandra space telescope has indeed detected evidence of this absorption and, with admittedly still large uncertainty, the results suggest that this indeed makes the budget for ordinary matter add up.

Nature Discovers Physics

The journal Nature has finally discovered physics! For a while now there have been specialized Nature journals for different disciplines. Now, starting in October 2005, they’ve discovered physics: Nature Physics.
Oh, and look at the list of what they will cover:

* quantum physics
* atomic and molecular physics
* statistical physics, thermodynamics and nonlinear dynamics
* condensed-matter physics
* fluid dynamics
* optical physics
* chemical physics
* information theory and computation
* electronics, photonics and device physics
* nanotechnology
* nuclear physics
* plasma physics
* high-energy particle physics
* astrophysics and cosmology
* biophysics
* geophysics

Clearly not alphabetical and quantum physics is number one!

Four More Pages

Ken Brown suggests the following solution to the Physical Review Letters “problem”:

I think a possibl[ity] is to make it so no one can submit to PRL. Instead the editors/refs can choose to bump your paper up from PRX. Then the PRX paper would be published in full and the PRL would be a short summary(intro and conclusions) with the details left in PRX.

This is a really intriguing idea. There must be something wrong with it, but I can’t see it right now. One small problem, I think, is that as the system is currently set up, when I submit a paper to Phys. Rev. A, there is usually only one referee for the paper, and it seems a bit much to put all of the decision making in two people’s hands (one ref, one editor.)