More Dice

The full ‘t Hooft (look, I put the apostrophe in the correct location!) article is now posted at Physics World (not Physics Today, as I listed incorrectly in my first post), along with commentary by Edward Witten, Fay Dowker, and Paul Davies. Quick summary: Witten thinks that quantum cosmology is perplexing, Dowker worries about the emergence of classical physics, and Davies postulates that complexity is the key to understanding the emergence of classicality. Davies suggests that quantum mechanics will break down when the Hilbert space is of size 10^120 and suggests that quantum computers will fail at this size. His argument could equally be applied to probabilistic classical computers, and so I suggest that if he is right, then classical computers using randomness cannot be any larger than about 400 bits.
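The 400-bit figure is just a change of base: an n-bit register (quantum or probabilistic classical) lives in a state space of dimension 2^n, so Davies' proposed cutoff dimension of 10^120 corresponds to n = 120·log2(10) bits. A quick sanity check:

```python
import math

# An n-bit register has a state space of dimension 2**n, so a cutoff
# dimension of 10**120 corresponds to n = log2(10**120) bits.
cutoff_bits = 120 * math.log2(10)
print(f"{cutoff_bits:.1f} bits")  # ~398.6, i.e. roughly 400 bits
```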

Digits or Orders

How well verified is the theory of quantum electrodynamics (QED)? If you ask most physicists, one of the first things that comes to mind is the agreement between QED’s theoretical calculation of the anomalous magnetic moment of the electron and the extremely precise measurement of this moment. In fact, last night, while I was spending my time usefully watching the Colbert Report on Comedy Central, guest Brian Greene brought up exactly this example (well, he didn’t say in so many words that this is what he was talking about, but it was pretty clear. The interview, by the way, was pretty funny.)
The electron magnetic moment anomaly is [tex]$a={g-2 \over 2}$[/tex], measuring the deviation of the electron magnetic moment from its unperturbed g value of 2. Experiments done here at the University of Washington by the Dehmelt group in the late eighties gave an experimentally determined value of the anomaly of [tex]$a=1159652188.4(4.3) \times 10^{-12}$[/tex], where the number in parentheses is the error. Now that’s a pretty precise measurement! On the other side of the physics department, theorists have calculated the value of the anomaly in quantum electrodynamics. This calculation yields an expression for the anomaly in powers of the fine structure constant, and requires calculating Feynman diagrams to eighth order in perturbation theory. The current theoretical calculation yields an expression, to eighth order, of
[tex]$a_{th}=A_2 \left({\alpha \over \pi}\right)+ A_4 \left({\alpha \over \pi}\right)^2+ A_6 \left({\alpha \over \pi}\right)^3+ A_8\left({\alpha \over \pi}\right)^4$[/tex]
where
[tex]$A_2=0.5$[/tex]
[tex]$A_4=-0.328478965579 \dots$[/tex]
[tex]$A_6=1.181241456 \dots$[/tex]
[tex]$A_8=-1.7366(384)$[/tex]
The first three of these terms are essentially analytically known (i.e. they can be readily obtained from functions which we can numerically calculate to any desired accuracy), and the last term, which has an error in it, is obtained by a numerical evaluation. So how well do theory and experiment agree? Well, we need a value of the fine structure constant! There are many experiments which can be used to determine the fine structure constant. Among the best are experiments using the quantum Hall effect, which yield [tex]$\alpha^{-1}=137.0360037(33)~[2.4 \times 10^{-8}]$[/tex], where the number in brackets is a fractional uncertainty. Using this value of the fine structure constant in the perturbative expansion for the theoretical expression gives [tex]$a_{th}=1159652153.5 (1.2)~(28.0) \times 10^{-12}$[/tex], where the number in the first parentheses is the error from the theory calculation and the second is the error coming from the uncertainty in the value of the fine structure constant.
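You can reproduce this number yourself from the coefficients above. Here is a minimal sketch, using the quantum Hall value of the fine structure constant and the standard signs of the coefficients (A_4 and A_8 are negative):

```python
import math

alpha = 1 / 137.0360037   # fine structure constant (quantum Hall determination)
x = alpha / math.pi

# Coefficients of the perturbative expansion for the anomaly
# (note the alternating signs: A_4 and A_8 are negative).
A2, A4, A6, A8 = 0.5, -0.328478965579, 1.181241456, -1.7366

a_th = A2 * x + A4 * x**2 + A6 * x**3 + A8 * x**4
print(f"a_th = {a_th * 1e12:.1f} x 10^-12")  # close to the quoted 1159652153.5 x 10^-12
```

Note how fast the series converges: each successive term is suppressed by another factor of α/π ≈ 0.0023.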
So, returning to the question I started with, how well verified is QED? Well, in the regime where these experiments have been performed, the results agree to an amazing precision. And when explaining this to the public, it is certainly valid to count the number of digits to which this calculation agrees with experiment. But for me, I’m more comfortable saying that the above discussion shows that we’ve verified quantum electrodynamics to eighth order in perturbation theory (or to fourth order in its coupling constant.) Why do I prefer this? Well, mostly because, as I understand it, modern particle theory basically says that QED must be an effective field theory for some deeper theory. Or in other words, it can’t be QED all the way down. Thus it seems more proper to ask how far down the perturbation ladder we’ve verified QED. And again, while eighth order may not sound as amazing as ten, eleven, or twelve digits of precision, it is still an amazing verification.
And anyway, who says we should be using base ten for our measure of precision? Me, I’m in a computer science department, so it seems that base two would be much better (and you might even convince me that natural logarithms are better still.)
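Converting the measure of precision between bases is just a change of logarithm base. A toy conversion, assuming roughly twelve decimal digits of agreement:

```python
import math

decimal_digits = 12
bits = decimal_digits * math.log2(10)  # precision expressed in base two
nats = decimal_digits * math.log(10)   # precision in natural-log units

print(f"{decimal_digits} decimal digits ~ {bits:.1f} bits ~ {nats:.1f} nats")
```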

Best Title Ever Submission: Cryptobaryons!

I thought that Physical Review Letters had a policy about using new words in the titles of papers. How, then, did Cryptobaryonic Dark Matter by C. D. Froggatt and H. B. Nielsen get by the censors?

It is proposed that dark matter could consist of compressed collections of atoms (or metallic matter) encapsulated into, for example, 20 cm big pieces of a different phase. The idea is based on the assumption that there exists at least one other phase of the vacuum degenerate with the usual one. Apart from the degeneracy of the phases we only assume standard model physics. The other phase has a Higgs vacuum expectation value appreciably smaller than in the usual electroweak vacuum. The balls making up the dark matter are very difficult to observe directly, but inside dense stars may expand absorbing the star and causing huge explosions (gamma ray bursts). The ratio of dark matter to ordinary matter is expressed as a ratio of nuclear binding energies and predicted to be about 5.

Dicey ‘t Hooft

Does God Play Dice? is not a treatise on religion and gambling, but is instead Gerard ‘t Hooft’s submission to Physics World (not Physics Today) concerning the reconciliation of quantum theory with general relativity. The most interesting part of this short note is not that ‘t Hooft comes out squarely on the side of hidden variable theories, but instead his description of an idea for how general relativity might arise in such a theory:

An even more daring proposition is that perhaps also general relativity does not appear in the formalism of the ultimate equations of nature. This journal does not allow me the space to explain in full detail what I have in mind. At the risk of not being understood at all, I’ll summarize my explanation. In making the transition from a deterministic theory to a statistical treatment — read: a quantum mechanical one —, one may find that the quantum description develops much more symmetries than the, deeper lying, deterministic one. If, classically, two different states evolve into the same final state, then quantum mechanically they will be indistinguishable. This induces symmetries not present in the initial laws. General coordinate covariance could be just such a symmetry.

That general coordinate covariance may not be fundamental, but is instead a product of our inability to access the beables of a theory, seems like quite an interesting idea. It would be interesting to ask whether this type of hidden variable theory, which is not totally general because it needs to recover general coordinate covariance, is indeed large enough to be consistent with quantum theory. That is, in the same way that Bell’s theorem rules out local hidden variable theories, is there a similar theorem ruling out hidden variable theories with ‘t Hooft’s property? I certainly have no intuition about the answer to this question in either direction.
Of further interest, ‘t Hooft claims as motivation for his perspective the following:

Nature provides us with one indication perhaps pointing in this direction: the unnatural, tiny value of the cosmological constant. It indicates that the universe has the propensity of staying flat. Why? No generally invariant theory can explain it. Yet, if an underlying, deterministic description naturally features some preferred flat coordinate frame, the puzzle will cease to perplex us.

Finally, for no reason but to make some portion of the readers of this blog happy and the other portion angry, here is ‘t Hooft on string theory:

I am definitely unhappy with the answers that string theory seems to suggest to us. String theory seems to be telling us to believe in “magic”: duality theorems, not properly understood, should allow us to predict amplitudes without proper local or causal structures. In physics, “magic” is synonymous to “deceit”; you rely on magic if you don’t understand what it is that is really going on. This should not be accepted.

I wish I understood what ‘t Hooft means in this critique by “proper local or causal structures.”

E=mcHawking

Stephen Hawking, after being taken off his respirator, had to be resuscitated:

“They had to resuscitate, and that panicked a few people,” Bristol told the audience. “But he’s been there before.”

OK, there certainly is no debate: Stephen Hawking is more hard core than any other physicist out there. Hard core.

Anyons, Anyone?

A fundamental concept in modern physics is the idea of indistinguishable particles. All electrons, for example, as far as we can tell, are totally alike. All photons, as far as we can tell, are totally alike. Because these particles are indistinguishable, exchanging them must not have an observable consequence for the physics of our system. What does this mean? It means that the exchange of two indistinguishable particles must be a symmetry of our system! In nature we find that this manifests itself in two ways: either the wave function of two indistinguishable particles is multiplied by minus one when we exchange the particles, or the wave function is unchanged when we exchange the particles. Indistinguishable particles which obey the first rule are called fermions, and those that obey the latter rule are called bosons. All of the fundamental particles we know today are either fermions or bosons.
Often, in systems made up of many interacting parts, we find that excitations in these systems have many of the properties of normal particles. Such entities we call quasiparticles. Quasiparticles themselves will be (effectively) indistinguishable, and so they too will possess a symmetry under exchange of their positions. An interesting question to ask is what happens to quasiparticles under exchange of their positions. Well, what we find experimentally is that the quasiparticles are almost always fermions or bosons. And of course, what is interesting is the fact that I just said “almost always.” There are cases where we think that the quasiparticles obey rules different from those of fermions and bosons.
Now a little detour to explain what these different quasiparticles are. Suppose you swap two particles by moving each halfway around a circle. Suppose you are viewing this from a fixed point, so that you see the particles swapped in a clockwise direction. Compare this operation to the process of swapping the particles by moving them halfway around the circle in the counterclockwise direction. Are these different processes (that is, can we distinguish between these two processes)? Draw a line between the starting points of the two particles. Now rotate the circle about this axis. We see that we can continuously deform the clockwise process into the counterclockwise process. We say that in three spatial dimensions, these two processes are topologically indistinguishable. But now imagine that we live in a world with two spatial dimensions. In such a world we can’t perform the above trick. We can’t rotate about that axis between the two particles, because that would take us out of our two (spatial) dimensional world. What does this mean? Well, in three spatial dimensions, we see that the symmetry which concerns us for indistinguishable particles is that of the symmetric group, where we just permute labels. But in two dimensions, swapping in the counterclockwise direction is different from swapping in the clockwise direction. This means that the symmetry of swapping particles is no longer the symmetric group, but instead a group called the braid group. If we track the worldlines of particles in two spatial dimensions plus one time dimension, these paths will “braid” around each other.
Now back to the story at hand. I said that particles are indistinguishable and so when we swap them this should have no observable consequence. And I said that all fundamental particles and most quasiparticles do this by multiplying the wave function by plus or minus 1. But you might ask: why isn’t it possible to multiply the wave function by some other phase, say [tex]$e^{i \theta}$[/tex]? Well, we know that swapping clockwise around our circle and counterclockwise around our circle are inverse operations of each other. Thus if going clockwise around our circle gives us a phase of [tex]$e^{i\theta}$[/tex], then going counterclockwise around our circle should give us a phase of [tex]$e^{-i\theta}$[/tex]. But in three spatial dimensions we saw that these two processes were topologically equivalent. This means that [tex]$e^{i \theta}=e^{-i \theta}$[/tex]. But this is only true if [tex]$\theta$[/tex] is an integer multiple of pi. Indeed, we see that in three spatial dimensions the phase can only be plus or minus one when we swap these particles. (There is also the possibility of parastatistics, where one uses an internal degree of freedom and obtains higher dimensional representations of the symmetric group. But there is a way to make particles which obey parastatistics look like composites made up of fermions or bosons, and hence there is a good reason to say that in three spatial dimensions there are only fermions and bosons.) Now what is cool is that this argument falls apart for particles in two spatial dimensions. The clockwise and the counterclockwise swap are different in two spatial dimensions, so there is no requirement that [tex]$e^{i \theta}=e^{-i \theta}$[/tex].
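The phase argument above can be checked directly: in three dimensions the clockwise and counterclockwise swaps are the same process, so their phases must agree, which happens only when theta is an integer multiple of pi. A small sketch:

```python
import cmath
import math

def exchange_phase(theta):
    """Phase picked up by the wave function for a counterclockwise swap."""
    return cmath.exp(1j * theta)

# In 3D the clockwise swap (phase e^{-i theta}) is topologically the same
# process as the counterclockwise swap (phase e^{+i theta}), so the two
# phases must be equal.  That only happens at integer multiples of pi,
# where the phase is +1 (boson) or -1 (fermion):
for n in range(-2, 3):
    theta = n * math.pi
    assert abs(exchange_phase(theta) - exchange_phase(-theta)) < 1e-12
    assert abs(abs(exchange_phase(theta).real) - 1) < 1e-12

# In 2D no such constraint applies; e.g. theta = pi/4 gives two genuinely
# different phases for the two swap directions -- an anyon:
theta = math.pi / 4
print(exchange_phase(theta), exchange_phase(-theta))
```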
This leads one to postulate that in two spatial dimensions, particles can obey statistics where swapping the particles results in multiplying the wave function by a phase factor [tex]$e^{i \theta}$[/tex] where [tex]$\theta$[/tex] is not an integer multiple of pi. Such particles are called anyons (the name was coined by Frank Wilczek in 1982; for an interesting interview with Wilczek about anyons, see here.) Of course, we don’t live in a world with two spatial dimensions (we are not Flatlanders, eh?), so one might think that the possibility of anyons existing is nil. Oh well, interesting theory, but no practical applications. Right? Nope.
Remember that we also know that in many body systems there are quasiparticles which act very much like normal particles. And in many body systems we can imagine confining these systems such that they are (effectively) two dimensional. One way to do this is to use semiconductor technology to create very thin flat interfaces of differing materials. Then, by applying an electric field perpendicular to these layers, you create a potential well which can confine electrons along this perpendicular direction. If you cool the system down, then only the lowest energy level of this perpendicular direction will be occupied and the electron(s) will behave as two dimensional objects. So in such systems we might hope that there are quasiparticles which exhibit anyonic properties. Do such systems exist? To answer this question we need another detour. This time we need a detour into the quantum Hall effect.
The Hall effect is a classical effect which you learn about when you first study classical electrodynamics. Suppose you apply a voltage across two edges of a plate. Current will flow between these two edges. Now if we apply a magnetic field perpendicular to this plate, the electrons moving along this direction will experience a Lorentz force in the plane of the plate, perpendicular to the current. This results in a voltage drop across the plate perpendicular to the current direction. This is the Hall effect. Equilibrium for this setup occurs when the charge buildup on the edges parallel to the current produces an electric force which exactly balances the force from the applied magnetic field. A simple calculation shows that the Hall conductance for this setup is [tex]${ne \over B}$[/tex], where [tex]$n$[/tex] is the number density of the current carriers, [tex]$e$[/tex] is the electric charge, and [tex]$B$[/tex] is the magnetic field strength. The Hall resistance, which is the inverse of this conductance, varies linearly with the applied magnetic field.
In 1980 von Klitzing discovered that if one took one of the two dimensional quantum systems described above, applied a strong magnetic field (a few Tesla), and cooled the system to a few Kelvin, then the Hall resistance no longer varies linearly with the applied magnetic field. In fact, the resistance showed a series of steps as a function of the strength of the applied magnetic field. This effect is known as the integer quantum Hall effect. How to explain this effect? Well, when we apply a perpendicular magnetic field to our two dimensional system, this results in a change in the energy levels of the system. In particular, what happens is that instead of having a continuous set of allowed energy levels for the electron gas, the levels are now quantized into discrete (highly degenerate) energy levels. These energy levels are separated by an amount of energy given by Planck’s constant times the cyclotron frequency of the electrons (the cyclotron frequency is [tex]$\omega_C={eB \over m}$[/tex], where [tex]$m$[/tex] is the (effective) mass of the charge carrier. This is the frequency at which an electron will circle in an applied magnetic field.) Now recall that in an electron gas at zero temperature, we fill up the energy levels up to the Fermi energy. Apply a perpendicular magnetic field, and the different energy levels (called Landau levels) will fill up. But there will be cases where the Fermi energy lies in a gap between the Landau levels. Thus in order for electrons to scatter out of the filled energy levels, they must overcome this energy gap. At low temperatures they can’t do this, and so there is no scattering. Varying the magnetic field moves the spacing between the energy levels. Thus over a range of values where the Fermi energy sits between Landau levels, the Hall resistance will not change. This is the origin of the plateaus observed by von Klitzing.
We can define a quantity called the filling factor, which tells us how many Landau levels are full (roughly, the number of electrons divided by the number of states in a Landau level). At integer values of the filling factor we will observe the effects of the gapped Landau levels. This is why it is called the integer quantum Hall effect.
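On a plateau, the Hall resistance takes the quantized value h/(νe²), set entirely by fundamental constants and the filling factor ν. A quick computation of the quantum of Hall resistance and a couple of plateau values (the particular filling factors below are just illustrative choices):

```python
# Plateau values of the Hall resistance: R_xy = h / (nu * e^2),
# where nu is the filling factor.
h = 6.62607015e-34   # Planck's constant (J s)
e = 1.602176634e-19  # electron charge (C)

R_K = h / e**2       # quantum of Hall resistance (von Klitzing constant)
print(f"h/e^2 = {R_K:.1f} ohm")  # ~25812.8 ohm

for nu in (1, 2, 1/3):  # integer plateaus, plus the 1/3 fractional plateau
    print(f"nu = {nu}: R_xy = {h / (nu * e**2):.1f} ohm")
```

This quantization is so precise that h/e² is used as a resistance standard in metrology.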
Continuing with our story, in 1982 Stormer and Tsui performed a cleaner version of von Klitzing’s experiment and observed that not only is there a quantum Hall effect at integer values of the filling factor, but there is also a quantum Hall effect at fractional values of the filling factor. Amazingly, these were at very simple fractions like 1/3, 1/5, 2/5, etc. For the integer quantum Hall effect, we are essentially dealing with a theory of quasiparticles which are weakly interacting. But for the fractional quantum Hall effect, it was soon realized that the effect must arise from strongly interacting particles. In 1983 Robert Laughlin put forth a theory to explain the fractional quantum Hall effect by introducing a famous ansatz for the wavefunction of this system which could successfully explain the plateaus in the fractional quantum Hall effect (well, further modifications were needed for effects at higher filling factors.) Now what is interesting, getting back to our main story, is that this wavefunction has quasiparticle excitations which are anyons! In fact, these excitations would not only behave like they had anyonic statistics, but would also behave like they had fractional values of their charge.
Now the question arises: the theory explaining the fractional quantum Hall effect has anyonic quasiparticles, but has the effect of these fractional statistics ever been observed? Well, there were early experiments which were consistent with such an interpretation, but a really convincing experiment which would directly verify the fractional statistics had never been performed. That is, until recently. In Realization of a Laughlin quasiparticle interferometer: Observation of fractional statistics by F. E. Camino, Wei Zhou, and V. J. Goldman (all from Stony Brook University), the authors describe an experiment in which a very cool interferometer is used to directly verify the fractional statistics of the quasiparticle excitations in the fractional quantum Hall effect! This is very cool: the direct observation of fractional statistics!
To me, this whole story, from the theoretical ideas of differing statistics in two dimensions, to the physics of many body strongly interacting systems, and finally to the design of clever, hard experiments to verify the theory behind this strange physics, is one of the most beautiful results in modern physics. Interestingly, it is also possible that there are anyons which obey nonabelian statistics. This means (roughly) that under exchange the wavefunction is not multiplied by a phase (a process which commutes for all such exchanges), but instead another degree of freedom is multiplied by a matrix (so that the noncommutative nature of the braid group is directly realized.) There are some theories of the fractional quantum Hall effect which suggest that its quasiparticles might be nonabelian anyons. A complete discussion of nonabelian anyons would lead us into another fascinating story (see John Preskill’s notes on topological quantum computing for an awesome introduction to this subject.) But what is even cooler to ponder is that the experiment performed in the above article brings us one step closer to the possibility that even these strange particles with even stranger statistics may be tested in a similarly beautiful experiment. Now that would be an awesome experiment!
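A numerical sketch of what "nonabelian" means, using the two-anyon braid matrices of the Fibonacci anyon model (the specific F and R matrices below are the standard ones from the topological quantum computing literature, e.g. Preskill's notes; treat this as an illustration rather than a statement about any particular quantum Hall state):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # golden ratio

# Fibonacci anyon model: F-matrix (basis change, its own inverse)
# and R-matrix (exchange eigenvalues).
F = np.array([[1 / phi, 1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi]])
R = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])

sigma1 = R          # braid the first pair of anyons
sigma2 = F @ R @ F  # braid the second pair, in the rotated basis

# The two exchanges act on a degree of freedom by matrices and do NOT
# commute -- the order of braids matters, which is exactly what
# "nonabelian statistics" means:
print(np.allclose(sigma1 @ sigma2, sigma2 @ sigma1))  # False

# But they do satisfy the braid group relation s1 s2 s1 = s2 s1 s2:
print(np.allclose(sigma1 @ sigma2 @ sigma1, sigma2 @ sigma1 @ sigma2))  # True
```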

Nature Physics Versus PRL

On returning home from Boston (one and a half days late due to 1. a broken plane and 2. a missed connection in Dulles), I found I had received my first copy of Nature Physics. I must say, I was very impressed by the first issue. Why? Well, first of all there are the gorgeous color pictures. Everyone loves color pictures, right? Seeing a journal of well written, beautifully typeset and illustrated physics articles appeals to the right side of my brain (the part that hasn’t shriveled away from too much math 😉 ) Second, unlike PRL, where there is absolutely no way one can read all of the articles in a single issue, Nature Physics has a reasonable number of well written articles (at least in this initial issue.) I see from their author submission page that letters are limited to 1500 words (typically four pages) and articles to 3000 words, which is a bit more freedom than that afforded by Physical Review Letters. I suspect that Nature Physics will quickly become a prestigious place to publish one’s work. Of course, in an idealized world, it wouldn’t matter where you publish your work, as long as it was excellent work. On the other hand, it is true that, due to the selectivity of different journals, I’m more likely to find excellent work in certain journals than in others.

Signaling and Information

One basic concept which has emerged in classical physics (and then moved on to the quantum world) is that it doesn’t appear possible to send information faster than the speed of light. What I find fascinating about this is that it seems to be, like Landauer’s principle, a statement which connects physics (special relativity, local field theory, quantum field theory, etc.) with, essentially, information theory (the concept of a signal, the concept of information capacity).
But now suppose, as I have argued before, that what makes a physical system a “storage” device is very much a matter of the physics of the device. Thus, for instance, I could try to encode information in the position of a particle on a line. Is this a good way to encode information? Well, certainly we could try to do this, but in the real world it will be very hard to achieve many orders of magnitude of precision in this encoding, because the system will interact with the rest of the world. While the system is isolated we can talk about such an encoding, but as we crank up the real world meter, we find that there are limits on this encoding. Or to put it another way, the ability of the system to store information is a function of how it interacts with the rest of the world, a function of the physics of the system.
And if the ability of a system to store information is a function of the physics of the system, why isn’t the no-faster-than-light rule, a rule which is essentially about information transmission in physical systems, also a function of the physics of the system?