E=mcHawking

Stephen Hawking, after being taken off his respirator, had to be resuscitated:

“They had to resuscitate, and that panicked a few people,” Bristol told the audience. “But he’s been there before.”

OK, there certainly is no debate: Stephen Hawking is more hard core than any other physicist out there. Hard core.

Shor's Legacy

Quantum computation was put on the map, so to speak, by Peter Shor’s discovery in 1994 that quantum computers could efficiently factor numbers. It is interesting to contemplate how this discovery will be viewed in the distant future. We are now over ten years out from Shor’s discovery. Perhaps this is too short a time to make a judgement, but let’s be arrogant and try to make such a historical judgement anyway. Along these lines, I’d claim that Shor’s algorithm is one of the most significant discoveries about algorithms in modern history. There is a certain group of algorithms, like Euclid’s algorithm for computing a greatest common divisor, which are, in my mind, among the most beautiful, eternal algorithms we know. (Algorithms from the code book, so to speak.) I would like to claim that Shor’s algorithm belongs in the same category as these algorithms. First of all, Shor’s algorithm solves a problem, factoring, which feels like one of those problems that lies very close to the base of mathematics. Certainly most of us learned about factoring numbers at a very young age, and indeed the view of a number as a product of its factors shapes in a deep way the manner in which we think about integers. Second, Shor’s algorithm’s use of period finding reveals in a deep way how quantum theory changes the way we can process information. That global properties of symmetric quantum states can be extracted from quantum systems, but not from their classical equivalents, is a deep observation, and one which we are only now beginning to push in new directions.
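(As an aside, since I invoked Euclid: part of what makes his algorithm feel so eternal is just how compact it is. A minimal sketch in Python, using nothing but the fact that gcd(a, b) = gcd(b, a mod b):

    def gcd(a, b):
        # Euclid's insight: gcd(a, b) = gcd(b, a mod b), and gcd(a, 0) = a.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(252, 105))  # 21, since 252 = 2^2 * 3^2 * 7 and 105 = 3 * 5 * 7

Two thousand years later, that’s still how we compute it.)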
So, I’ll claim, Shor’s algorithm is a very profound discovery which will be thought of for centuries to come as one of humanity’s most beautiful discoveries. Interestingly, I think this also puts a huge cloud over those of us working to discover new quantum algorithms. For instance, Sean Hallgren’s discovery that quantum computers can efficiently solve Pell’s equation is an awesome result, but viewed in light of factoring, well, it’s just hard to compete! Certainly one effect of this is that looking for new quantum algorithms has gained a reputation as being a very difficult field. And indeed, if you define difficult as coming up with something deeper than Shor’s algorithm, then working towards new quantum algorithms can seem a hopeless task. Indeed, this is one of the reasons there has been so much focus on the hidden subgroup problem: an efficient algorithm for this problem would lead to efficient algorithms for the graph isomorphism problem and for certain shortest-vector-in-a-lattice problems, and these are big problems, whose solution would represent big progress in algorithms. So we in quantum algorithms set our sights extremely high, because Shor set a bar which is at world-record pole-vault heights. But this point of view also argues for working on much smaller progress in quantum algorithms: understanding more basic, simple primitives even if they don’t lead to algorithms which are “better than Shor.” I, myself, struggle with thinking along these lines: our expectations are high, and it is hard not to try to jump over the Shor barrier. But perhaps those with better self control can move in these directions, and maybe, if we are lucky, this progress will eventually show us new ways that quantum computers can outperform their nasty rivals, classical computers.

Ski Season 05-06, Day 1

This last Saturday I had the opportunity to go on my first ski trip of the season. Amazingly, the northwest has been getting pounded by snow over the last few weeks. Now this posed a problem because, as some of you may recall, at the end of last season I did the stupidest thing I’ve ever done and had to be rescued by the ski patrol. In the process, my skis from last year were completely destroyed. With the snow starting to fall, this made me extremely nervous. Would I, after nearly twenty years of skiing, finally be forced to actually rent skis again? The horror! Well, I went to rent skis at a local shop, and, well, you know how it is. You see the nice pretty new skis and you just can’t resist. So I’m now the proud owner of a pair of 2005 Rossignol Bandit B2 182cm skis. And a helmet.
Anyway, we made the two and a half hour drive from Seattle to Mt. Baker. The snow was good: they had a bit over a foot of new snow. The first runs were very nice, and at the end of the day, when I’d remembered how to ski again, it began snowing again, so the last runs were very nice too. Mt. Baker is a very flaky ski resort. They have, like, one trail map sign for the entire resort. There were a couple of places I would have liked to ski, but they need another few feet of snow before I take my new skis on those runs.
Two highlights from the trip. One was early in the day. I hit a jump, not realizing that there was another jump immediately after it. I ended up nearly eating my ski tips, but somehow managed to stay upright over the second jump. I’m not sure what happened to my friend Bill, who was following me over these two bumps. All I know is that we were under the chair lift, and the most amazing groans and shouts of awe came from the lift. It must have been pretty good to elicit such a vocal response. The other highlight came at the end of the day. It was snowing, and the conditions were such that you couldn’t see ANYTHING. I mean I had absolutely no depth perception. So I skied by following one of my fellow travelers, a fellow by the name of Brendan. He’s a boarder, and he was flying down the hill with me close on his heels. Turn. Turn. Turn. Tur….whooops, that turn was in the middle of the air. Brendan had hit a dropoff mid turn. I was able to stop, but I can tell you there is nothing funnier than watching the increasing panic in someone who is midair and wondering, “where the heck is the ground?” Priceless.

Edgy Deutsch

Via Geomblog, I see that David Deutsch has won the 2005 $100,000 Edge of Computation Science Prize (see the post here.) Here is the award citation:

Although the general idea of a quantum computer had been proposed earlier by Richard Feynman, in 1985 David Deutsch wrote the key paper which proposed the idea of a quantum computer and initiated the study of how to make one. Since then he has continued to be a pioneer and a leader in a rapidly growing field that is now called quantum information science.
Presently, small quantum computers are operating in laboratories around the world, and the race is on to find a scalable implementation that, if successful, will revolutionize the technologies of computation and communications. It is fair to say that no one deserves recognition for the growing success of this field more than Deutsch, for his ongoing work as well as for his founding paper. Among his key contributions in the last ten years are a paper with Ekert and Jozsa on quantum logic gates, and a proof of universality in quantum computation, with Barenco and Ekert (both in 1995).
One reason to nominate Deutsch for this prize is that he has always aimed to expand our understanding of the notion of computation in the context of the deepest questions in the foundations of mathematics and physics. Thus, his pioneering work in 1985 was motivated by interest in the Church-Turing thesis. Much of his recent work is motivated by his interest in the foundations of quantum mechanics, as we see from his 1997 book.

Congrats to David Deutsch!

Anyons, Anyone?

A fundamental concept in modern physics is the idea of indistinguishable particles. All electrons, for example, as far as we can tell, are totally alike. All photons, as far as we can tell, are totally alike. Because these particles are indistinguishable, exchanging them must not have an observable consequence on the physics of our system. What does this mean? It means that the exchange of two indistinguishable particles must be a symmetry of our system! In nature we find that this manifests itself in two ways: either the wave function of two indistinguishable particles is multiplied by minus one when we exchange the particles, or the wave function is unchanged when we exchange the particles. Indistinguishable particles which obey the first rule are called fermions, and those that obey the latter rule are called bosons. All of the fundamental particles we know today are either fermions or bosons.
Often, in systems made up of many interacting particles, we find that the excitations of the system have many of the properties of normal particles. Such entities we call quasiparticles. Quasiparticles themselves will be (effectively) indistinguishable, and so they too will possess a symmetry under exchange of their positions. An interesting question to ask is what happens to quasiparticles under exchange of their positions. Well, what we find experimentally is that the quasiparticles are almost always fermions or bosons. And of course, what is interesting is the fact that I just said “almost always.” There are cases where we think that the quasiparticles obey rules different from those of fermions and bosons.
Now a little detour to explain what these different quasiparticles are. Suppose you swap two particles by moving each halfway around a circle. Suppose you are viewing this from a fixed point, so that you see the particles swapped in a clockwise direction. Compare this operation to the process of swapping the particles by moving them halfway around the circle in the counterclockwise direction. Are these different processes (that is, can we distinguish between these two processes)? Draw a line between the starting points of the two particles. Now rotate the circle about this axis. We see that we can continuously deform the clockwise process into the counterclockwise process. We say that in three spatial dimensions these two processes are topologically indistinguishable. But now imagine that we live in a world with two spatial dimensions. In such a world we can’t perform the above trick. We can’t rotate about that axis between the two particles, because that would take us out of our two (spatial) dimensional world. What does this mean? Well, in three spatial dimensions, we see that the symmetry which concerns us for indistinguishable particles is that of the symmetric group, where we just permute labels. But in two dimensions, swapping in the counterclockwise direction is different from swapping in the clockwise direction. This means that the symmetry of swapping particles is no longer the symmetric group, but instead a group called the braid group. If we track the worldlines of particles in two spatial dimensions plus one time dimension, these paths will “braid” around each other.
Now back to the story at hand. I said that particles are indistinguishable, and so swapping them should have no observable consequence. And I said that all fundamental particles and most quasiparticles do this by multiplying the wave function by plus or minus 1. But you might ask: why isn’t it possible to multiply the wave function by some other phase, say [tex]$e^{i\theta}$[/tex]? Well, we know that swapping clockwise around our circle and swapping counterclockwise around our circle are inverse operations of each other. Thus if going clockwise around our circle gives us a phase of [tex]$e^{i\theta}$[/tex], then going counterclockwise around our circle should give us a phase of [tex]$e^{-i\theta}$[/tex]. But in three spatial dimensions we saw that these two processes were topologically equivalent. This means that [tex]$e^{i\theta}=e^{-i\theta}$[/tex]. But this is only true if [tex]$\theta$[/tex] is an integer multiple of pi. Indeed, we see that in three spatial dimensions the phase can only be plus or minus one when we swap these particles. (There is also the possibility of parastatistics, where one uses an internal degree of freedom and obtains higher dimensional representations of the symmetric group. But there is a way to make particles which obey parastatistics look like composites made up of fermions or bosons, and hence there is good reason to say that in three spatial dimensions there are only fermions and bosons.) Now what is cool is that the argument we just gave falls apart for particles in two spatial dimensions. The clockwise and the counterclockwise swaps are different in two spatial dimensions, so there is no requirement that [tex]$e^{i\theta}=e^{-i\theta}$[/tex].
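To spell out the arithmetic of that argument: in three dimensions the two swaps are the same process, so

[tex]$e^{i\theta}=e^{-i\theta} \Rightarrow e^{2i\theta}=1 \Rightarrow \theta=n\pi$[/tex]

for integer [tex]$n$[/tex], which forces the exchange phase to be [tex]$e^{in\pi}=\pm 1$[/tex]. In two dimensions the first equality never holds in the first place, and [tex]$\theta$[/tex] is left unconstrained.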
This leads one to postulate that in two spatial dimensions, particles can obey statistics where swapping the particles results in multiplying the wave function by a phase factor [tex]$e^{i\theta}$[/tex] where [tex]$\theta$[/tex] is not an integer multiple of pi. Such particles are called anyons. (The name was coined by Frank Wilczek in 1982. For an interesting interview with Wilczek about anyons, see here.) Of course, we don’t live in a world with two spatial dimensions (we are not Flatlanders, eh?), so one might think that the possibility of anyons existing is nil. Oh well, interesting theory, but no practical applications. Right? Nope.
Remember that we also know that in many body systems there are quasiparticles which act very much like normal particles. And in many body systems we can imagine confining the system so that it is (effectively) two dimensional. One way to do this is to use semiconductor technology to create very thin, flat interfaces between differing materials. Then, by applying an electric field perpendicular to these layers, you create a potential well which confines electrons along this perpendicular direction. If you cool the system down, then only the lowest energy level of this perpendicular direction will be occupied, and the electrons will behave as two dimensional objects. So in such systems we might hope that there are quasiparticles which exhibit anyonic properties. Do such systems exist? To answer this question we need another detour. This time we need a detour into the quantum Hall effect.
The Hall effect is a classical effect which you learn about when you first study classical electrodynamics. Suppose you apply a voltage across two edges of a plate. Current will flow between these two edges. Now, if we apply a magnetic field perpendicular to this plate, the electrons moving along the current direction will experience a Lorentz force in the plane of the plate, perpendicular to the current. This results in a voltage drop across the plate perpendicular to the current direction. This is the Hall effect. Equilibrium for this setup occurs when the charge buildup on the edges parallel to the current produces an electric force which exactly balances the force from the applied magnetic field. A simple calculation shows that the Hall conductance for this setup is [tex]${ne \over B}$[/tex], where [tex]$n$[/tex] is the number density of the current carriers, [tex]$e$[/tex] is the electric charge, and [tex]$B$[/tex] is the magnetic field strength. The Hall resistance, which is the inverse of the Hall conductance, thus varies linearly with the applied magnetic field.
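Here’s that simple calculation, for the record: in equilibrium the electric force on a carrier balances the magnetic force,

[tex]$eE = evB \Rightarrow E = vB,$[/tex]

and since the (sheet) current density is [tex]$j = nev$[/tex], the Hall conductance is

[tex]$\sigma_H = {j \over E} = {nev \over vB} = {ne \over B},$[/tex]

so the Hall resistance [tex]$R_H = {B \over ne}$[/tex] indeed grows linearly with [tex]$B$[/tex].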
In 1980 von Klitzing discovered that if one takes one of the two dimensional quantum systems described above, applies a strong magnetic field (a few Tesla), and cools the system to a few Kelvin, then the Hall resistance no longer varies linearly with the applied magnetic field. In fact the resistance shows a series of steps as a function of the strength of the applied magnetic field. This effect is known as the integer quantum Hall effect. How to explain this effect? Well, when we apply a perpendicular magnetic field to our two dimensional system, this changes the energy levels of the system. In particular, instead of having a continuous set of allowed energy levels for the electron gas, the levels are now quantized into discrete (highly degenerate) energy levels. These energy levels are separated by an energy [tex]$\hbar\omega_c$[/tex], where the cyclotron frequency [tex]$\omega_c={eB \over m}$[/tex] ([tex]$m$[/tex] being the effective mass of the charge carrier) is the frequency at which an electron circles in an applied magnetic field. Now recall that in an electron gas at zero temperature, we fill up the energy levels up to the Fermi energy. Apply a perpendicular magnetic field, and the different energy levels (called Landau levels) will fill up. But there will be cases where the Fermi energy lies in a gap between the Landau levels. Thus in order for electrons to scatter out of the filled energy levels, they must overcome this energy gap. But at low temperatures they can’t do this, and so there is no scattering. Varying the magnetic field changes the spacing between the energy levels. Thus over a range of values where the Fermi energy sits between Landau levels, the Hall resistance will not change. This is the origin of the plateaus observed by von Klitzing. We can define a quantity called the filling factor, which tells us how full each Landau level is. At integer values of the filling factor we will observe the effects of the gapped Landau levels. This is why it is called the integer quantum Hall effect.
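To get a feel for the numbers, here’s a back-of-the-envelope calculation in Python. The sample parameters (a GaAs-like effective mass of 0.067 electron masses and a sheet density of 10^15 electrons per square meter) are illustrative values of my own choosing, not numbers from the actual experiments:

    # Back-of-the-envelope Landau level numbers (SI units throughout).
    hbar = 1.0546e-34   # reduced Planck constant, J s
    h    = 6.6261e-34   # Planck constant, J s
    e    = 1.6022e-19   # electron charge, C
    m_e  = 9.1094e-31   # electron mass, kg
    k_B  = 1.3807e-23   # Boltzmann constant, J/K

    # Illustrative (assumed) sample parameters, GaAs-like:
    m_eff = 0.067 * m_e  # effective mass of the charge carriers
    n     = 1e15         # sheet density of electrons, per m^2
    B     = 4.0          # applied magnetic field, Tesla
    T     = 1.0          # temperature, Kelvin

    omega_c = e * B / m_eff    # cyclotron frequency, rad/s
    gap     = hbar * omega_c   # Landau level spacing, J
    nu      = n * h / (e * B)  # filling factor: electrons per flux quantum

    print("Landau gap:     %.2f meV" % (1000 * gap / e))      # ~6.91 meV
    print("thermal energy: %.3f meV" % (1000 * k_B * T / e))  # ~0.086 meV
    print("filling factor: %.2f" % nu)                        # ~1.03

The gap comes out about eighty times the thermal energy at one Kelvin, which is why the electrons can’t scatter out of the filled Landau levels.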
Continuing with our story: in 1982, Stormer and Tsui performed a cleaner version of von Klitzing’s experiment and observed that not only is there a quantum Hall effect at integer values of the filling factor, but there is also a quantum Hall effect at fractional values of the filling factor. Amazingly, these occurred at very simple fractions like 1/3, 1/5, 2/5, etc. For the integer quantum Hall effect, we are essentially dealing with a theory of weakly interacting quasiparticles. But for the fractional quantum Hall effect, it was soon realized that the effect must arise from strongly interacting particles. In 1983 Robert Laughlin put forth a theory to explain the fractional quantum Hall effect by introducing a famous ansatz for the wavefunction of this system, which could successfully explain the plateaus in the fractional quantum Hall effect (well, further modifications were needed for the higher filling factor effects). Now what is interesting, and getting back to our main story, is that this wavefunction has quasiparticle excitations which are anyons! In fact, these excitations would not only behave like they had anyonic statistics, but would also behave like they had fractional charge.
Now the question arises: the theory explaining the fractional quantum Hall effect has anyonic quasiparticles, but has the effect of these fractional statistics ever been observed? Well, there were early experiments which were consistent with such an interpretation, but a really convincing experiment which would directly verify the fractional statistics had never been performed. That is, until recently. In Realization of a Laughlin quasiparticle interferometer: Observation of fractional statistics by F. E. Camino, Wei Zhou, and V. J. Goldman (all from Stony Brook University), the authors describe an experiment in which a very cool interferometer is used to directly verify the fractional statistics of the quasiparticle excitations in the fractional quantum Hall effect! This is very cool: the direct observation of fractional statistics!
To me, this whole story, from the theoretical ideas of differing statistics in two dimensions, to the physics of many body strongly interacting systems, and finally to the design of clever, hard experiments to verify the theory behind this strange physics, is one of the most beautiful results in modern physics. Interestingly, it is also possible that there are anyons which obey nonabelian statistics. This means (roughly) that under exchange the wavefunction is not just multiplied by a phase (a process which commutes for all such exchanges), but instead another degree of freedom is multiplied by a matrix (so that the noncommutative nature of the braid group is directly realized). There are some theories of the fractional quantum Hall effect which suggest that certain of its quasiparticles might be nonabelian anyons. A complete discussion of nonabelian anyons would lead us into another fascinating story (see John Preskill’s notes on topological quantum computing for an awesome introduction to this subject). But what is even cooler to ponder is that the experiment performed in the above article brings us one step closer to the possibility that even these strange particles with even stranger statistics may be tested in a similarly beautiful experiment. Now that would be an awesome experiment!

Ranking By Wins

Yesterday I attended a talk by local CS graduate student Atri Rudra describing work he did with Don Coppersmith (IDA, Princeton) and Lisa Fleischer (IBM, NY) on rankings for weighted tournaments (see here and here). The result is pretty neat, so I thought I’d describe it here.
Suppose you have a tournament where every player plays every other player exactly once (the results are more general than this, but let’s keep it simple) and there are no ties. From this tournament, you’d like to develop a ranking of the players which best respects who beat whom. In other words, you’d like to rank the players from first place to last place in such a way that you minimize the number of times in your ranking that a lower placed person beat a higher placed person. Now, it turns out that this problem, finding the optimal such ranking, is NP-hard (this was recently shown by Ailon, Charikar, and Newman). I.e., as our tournaments get bigger and bigger, we have an ice cube’s chance in hell of finding this optimal ranking before we die 😉
So what do you do in the real world? Well, a simple strategy you might take is to produce a ranking simply according to the number of times each player has won. Thus the first ranked person is the person with the most wins, the second ranked person has the second most wins, etc., and we break ties in some arbitrary manner. A natural question to ask, then, is how close to the optimal ranking this simple strategy gets. Let’s measure our rankings by counting the number of times that a ranking fails (i.e. the number of times in the ranking that a lower ranked person beat a higher ranked person). Then there is a value of this quantity for the optimal ranking and one for the ranking obtained by the simple strategy of ranking by number of wins. What is shown in the paper by Coppersmith, Fleischer, and Rudra is that the simple ranking by number of wins has at most five times as many failings as the optimal ranking. This is rather amazing, to my simple mind: we get within a factor of five (at worst) by simply using a ranking which orders by number of wins (with ties broken arbitrarily). In fact, these authors also show that it doesn’t matter how you break ties: the worst case is always within a factor of five.
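Here’s a toy version of the simple strategy in Python, on a made-up four-player tournament (my own example, not one from the paper):

    from itertools import combinations

    players = ["A", "B", "C", "D"]

    # Results of a made-up round-robin: results[(x, y)] = True means x beat y.
    # Note the cycle: A beats B, B beats D, but D beats A.
    results = {
        ("A", "B"): True, ("A", "C"): True, ("A", "D"): False,
        ("B", "C"): True, ("B", "D"): True,
        ("C", "D"): True,
    }

    def beat(x, y):
        # Did x beat y?  Each pair plays exactly once, no ties.
        return results[(x, y)] if (x, y) in results else not results[(y, x)]

    def wins(x):
        return sum(beat(x, y) for y in players if y != x)

    # The simple strategy: sort by number of wins (ties broken arbitrarily).
    ranking = sorted(players, key=wins, reverse=True)

    def failings(ranking):
        # Count pairs where the lower ranked player beat the higher ranked one.
        return sum(beat(low, high) for high, low in combinations(ranking, 2))

    print(ranking)            # ['A', 'B', 'C', 'D']
    print(failings(ranking))  # 1: D is ranked last but beat A

In this little example the ranking by wins happens to be optimal (the cycle forces at least one failing no matter what), but the theorem says that even in the worst case you are never off by more than a factor of five.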
Kind of neat, no? A natural question, of course, is whether there are strategies which can be efficiently computed but which beat this factor of five bound. Indeed! It turns out that there are algorithms which achieve a factor of three. These algorithms, however, are more complicated. What is so nice here is that the simplest ranking you could come up with has such a nice approximation bound!

Blinded by Science

Scott Aaronson over at Shtetl-Optimized asks whether mathematics and the theory of computer science are actually “science.” My gut reaction to a question like this is just to avoid it: who cares whether math and TCS are science? What is important is that (some) math and (some) computer science are either (1) important to the progress of science or (2) important for practical reasons which we don’t classify as science (for example, as relating to technology). But I guess this just explains why I’m not a pure mathematician (besides the fact that I don’t have the brains!): a connection to experiment or a connection to technology is an important prerequisite for what I find important (note that this is different from what I find interesting). Of course, I put mathematicians who do their work solely for the beauty of it in the same category as other, more traditional, artists, so what do I know?

Wasting Your Time

Leslie points me to the Grand Illusions Shop, which has some pretty awesome stuff. I especially like the non-transitive dice game:

Ask your opponent to select any one of the four dice. You select another and both dice are thrown, at the same time, a predetermined number of times, to see who gets the highest number on each throw, and hence wins that throw. In a game of ‘The Best of Ten Throws’ you will almost always have more wins. Invite the player to choose another die – perhaps your ‘winner’ – leaving you to select another for yourself and play again. Again you will win. Whichever die your opponent selects, your choice, in a longer run of ten or more throws, will always win.
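The shop doesn’t say what numbers are on the faces, but Efron’s classic set of four non-transitive dice behaves exactly as described: each die beats the next one around the cycle with probability 2/3. A quick simulation in Python (using Efron’s dice as a stand-in; they may not be the shop’s actual dice):

    import random

    # Efron's non-transitive dice: A beats B, B beats C, C beats D,
    # and D beats A, each with probability 2/3.
    dice = {
        "A": [4, 4, 4, 4, 0, 0],
        "B": [3, 3, 3, 3, 3, 3],
        "C": [6, 6, 2, 2, 2, 2],
        "D": [5, 5, 5, 1, 1, 1],
    }

    def win_prob(x, y, trials=100000):
        # Estimate the probability that die x out-rolls die y.
        wins = sum(random.choice(dice[x]) > random.choice(dice[y])
                   for _ in range(trials))
        return wins / trials

    # Whichever die your opponent picks, you pick the one just
    # before it in the cycle A -> B -> C -> D -> A.
    cycle = ["A", "B", "C", "D", "A"]
    for x, y in zip(cycle, cycle[1:]):
        print("P(%s beats %s) ~ %.3f" % (x, y, win_prob(x, y)))  # each ~0.667

So whichever die your opponent takes, there is always another die that beats it two times out of three.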

I'm Right

OK, that whole left brain, right brain thing has pretty much no scientific support. However, I couldn’t resist taking a test to determine which of the stereotypes I fit:

Brain Lateralization Test Results
Right Brain (64%) The right hemisphere is the visual, figurative, artistic, and intuitive side of the brain.
Left Brain (30%) The left hemisphere is the logical, articulate, assertive, and practical side of the brain.

Are You Right or Left Brained?
personality tests by similarminds.com

Does this disqualify me from being a scientist?