Scott Aaronson has written a nice article “Ten Semi-Grand Challenges for Quantum Computing Theory”. If you are a computer science theory researcher interested in what to work on in quantum computing, I highly recommend the list. One thing I find very interesting about theory work in computer science is how religious researchers are about not sharing the problems they are working on. So it is very nice of Scott to share what he thinks are big open problems in quantum computing theory today.
One thing that Scott leaves out of the list, which I would have included, is a question along the lines of "how can quantum computing theory contribute to classical computing theory?" Scott explicitly says he does not include this question because it is "completely inexhaustible," and I agree that this is certainly true, but that may be exactly the reason one should work on it! The idea behind this line of research is to prove results in classical computational theory using insights gained from quantum computational theory. An analogy which may or may not be stretching things a bit is the relationship between real analysis and complex analysis. Anyone who has studied these two subjects knows that real analysis is much more difficult than complex analysis. Physicists know this best: they often cannot easily do certain real integrals unless they pretend their real variables are complex and integrate along a particularly well-chosen contour. Similarly, many results in real analysis have counterparts in complex analysis which are easy to prove. So the line of research which asks whether quantum computation can contribute to classical computation is basically "as complex analysis is to real analysis, so quantum computing is to classical computing." I've discussed this possibility (along with Scott's contribution to it) here.
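The simplest textbook example of the contour trick, just to make the analogy concrete (my example, not one from Scott's article): the real integral of 1/(1+x^2) over the whole line falls out immediately from the residue theorem,

```latex
\int_{-\infty}^{\infty}\frac{dx}{1+x^{2}}
   \;=\; \lim_{R\to\infty}\oint_{C_R}\frac{dz}{1+z^{2}}
   \;=\; 2\pi i\,\operatorname{Res}_{z=i}\frac{1}{1+z^{2}}
   \;=\; 2\pi i\cdot\frac{1}{2i}
   \;=\; \pi ,
```

where C_R is the segment [-R, R] of the real axis closed by a semicircle in the upper half-plane; the semicircle's contribution vanishes as R grows, and the only enclosed pole is the simple pole at z = i.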
Best Title Ever? A New Nomination
On the quant-ph arXiv today, we find Steven van Enk having way too much fun:
Quantum Physics, abstract
quant-ph/0507189
From: Steven J. van Enk
Date: Tue, 19 Jul 2005 19:10:35 GMT (2kb)
Title: |0>|1>+|1>|0>
Authors: S.J. van Enk
Comments: 1.1 page, unnormalized title
Abstract: is entangled. (There is nothing more in the abstract, but quant-ph does not accept 2-word abstracts, apparently.)
Erasing Landauer's Principle
Three Toed Sloth (who has been attending the complex systems summer school in China which I was supposed to attend before my life turned upside down and I ran off to Seattle) has an interesting post on Landauer's principle. Landauer's principle is roughly the principle that erasing information in thermodynamics dissipates an amount of entropy of at least Boltzmann's constant times ln 2 per bit erased. Cosma points to two papers, Orly Shenker's "Logic and Entropy" and John Norton's "Eaters of the Lotus", which both claim problems with Landauer's principle. On the bus home I had a chance to read both of these papers and at least get an idea of what the arguments are. Actually, both articles point towards the same problem.
Here is a simplistic approach to Landauer's principle. Suppose you have a bit which has two values of a macroscopic property, which we call 0 and 1. Also suppose that there are other degrees of freedom for this bit (like, say, the pressure of whatever is physically representing the bit). Now make up a phase space with one axis representing the 0 and 1 variable and another axis representing these other degrees of freedom. Actually, let's fix this external degree of freedom to be the pressure, just to make notation easier. Imagine now the process which causes erasure. Such a process will take 0 to 0, say, and 1 to 0. Now look at this process in phase space, remembering that phase space volumes must be conserved. Examine two phase space volumes. One corresponds to the bit being 0 and some range of the pressure. The other corresponds to the bit being 1 and this same range of pressures. In the erasure procedure, we take 1 to 0, but now, because phase space volume must be preserved, we necessarily must change the values of the extra degree of freedom (the pressure): we can't map the "1 plus range of pressures" region onto the "0 plus the same range of pressures" region, because that latter bit of phase space is already used up. What this necessitates is an increase of entropy, which at its smallest is k ln 2.
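To make the counting explicit (a minimal sketch of the volume bookkeeping above, with W standing for the phase-space volume of the allowed pressure range for each bit value):

```latex
% Each bit value, together with the allowed pressure range, occupies a
% phase-space block of volume W; before erasure the bit could be in either
% block, after erasure only in the 0-block.
S_{\mathrm{before}} = k \ln(2W), \qquad S_{\mathrm{after,\,bit}} = k \ln W,
\qquad
\Delta S_{\mathrm{elsewhere}} \;\ge\; k\ln(2W) - k\ln W \;=\; k\ln 2 .
```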
From my quick reading of these articles, their issue is not so much with this argument per se, but with the interpretation of this argument (by which I mean they do not challenge the logical consistency of Landauer's and others' formulations of the principle, but instead challenge the interpretation of the problem these authors claim to be solving). In both articles we find the authors particularly concerned with how to treat the macroscopic variables corresponding to the bits 0 and 1. In particular they argue that, contrary to what is implicit in the above type of argument, we should not treat these macroscopic variables as thermodynamic-physical magnitudes. The author of the first paper makes this explicitly clear by replacing the phase space picture I've presented above with two pictures, one in which the bit of information is 0 and one in which the bit of information is 1, and stating things like "A memory cell that – possibly unknown to us – started out in macrostate 0 will never be in macrostate 1" (emphasis the author's). The authors of the second article make a similar point, in particular pointing out that "the collection of cells carrying random data is being treated illicitly as a canonical ensemble."
What do I make of all this? Well, I'm certainly no expert. But it seems to me that these arguments center upon some very deep and interesting problems in the interpretation of thermodynamics, and also, I would like to argue, upon the fact that thermodynamics is not complete (this may even be as heretical as my statement that thermodynamics is not universal; perhaps it is even more heretical!). What do I mean by this? Consider, for example, one of our classic examples of memory, the ferromagnetic Ising model in two or more dimensions. In such a model we have a bunch of spins on a lattice with interactions between nearest neighbors which have lower energy when the spins are aligned. In the classical thermodynamics of this system, above a certain critical temperature, in thermodynamic equilibrium, the total magnetization of the system is zero. Below this temperature, however, something funny happens. Two thermodynamic equilibrium states appear, one with the magnetization pointing mostly in one direction and one with the magnetization pointing mostly in the other direction. These are the two states into which we "store" information. But when you think about what is going on here, this bifurcation into two equilibria, you might wonder about the "completeness" of thermodynamics. Thermodynamics does not tell us which of these states is occupied, nor even that, say, each is occupied with equal probability. Thermodynamics does not give us the answer to a very interesting question: what is the probability distribution for the bit of stored information?
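To see this incompleteness concretely, here is a quick Metropolis Monte Carlo sketch of the two-dimensional Ising model (the lattice size, temperature, and sweep counts are my own illustrative choices). Below the critical temperature the system stays in whichever magnetized state you start it in; nothing in the equilibrium description itself tells you which one that will be.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D ferromagnetic Ising model
    (coupling J = 1, zero field, periodic boundary conditions)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest-neighbor spins.
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(1)
L, beta = 32, 0.6   # beta = 0.6 > beta_c ~ 0.44, i.e. temperature below T_c, so the system orders

for start in (+1, -1):
    spins = np.full((L, L), start, dtype=int)
    for _ in range(300):
        metropolis_sweep(spins, beta, rng)
    # The final magnetization tracks the initial condition, not thermodynamics alone.
    print(f"started all {start:+d}: magnetization per site = {spins.mean():+.2f}")
```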
And it is exactly this question around which the argument about Landauer's principle revolves. Suppose you decide that quantities such as the total magnetization should be treated as two totally separate settings, with totally different phase spaces which cannot be accessed at the same time. Then you are led to the objections to Landauer's principle sketched in the two papers. But now suppose that you take the point of view that thermodynamics should be completed in some way that takes these macroscopic variables into account as real thermodynamic physical variables. How to do this? The point, I think, that many physicists would make is that no matter how you do this, once you've got these variables into the phase space, the argument presented above will produce a Landauer's-principle-type conclusion. Certainly one way to do this is to assert that we don't know which of the states the system is in (0 or 1), so we should assign each equal probability, but the point is that whatever probability assignment you make, you end up with a similar argument in terms of phase space volume. Notice also that to really make these volumes, the macroscopic variables should have some "spread": what we call 0 and 1 are never precisely 0 and 1, but are instead some region around the magnetization pointing mostly in one direction and some region around it pointing mostly in the other.
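For instance, if the completion assigns the bit macrostate 0 with probability p and macrostate 1 with probability 1 - p, the same volume counting gives (a sketch of the standard generalization, not an argument taken from either paper):

```latex
\Delta S_{\mathrm{elsewhere}} \;\ge\; S_{\mathrm{bit}}
  \;=\; -k\left[\, p \ln p + (1-p)\ln(1-p) \,\right],
```

which is largest, k ln 2, at p = 1/2, but is strictly positive for any 0 < p < 1.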
I really like the objections raised in these articles. But I'm not convinced that either side has won this battle. One interesting thing which I note is that the argument against Landauer's principle treats the two macrostates 0 and 1 in a very "error-free" manner. That is to say, they treat these variables as truly digital values. But (one last heresy!) I'm inclined to believe that nothing is perfectly digital. The digital nature of information in the physical world is an amazingly good approximation for computers...but it does fail. If you were able to precisely measure the information stored on your hard drive, you would not find zeros and ones, but instead zeros plus some small fluctuation and ones plus some small fluctuations. Plus, if there is ever an outside environment influencing the variable you are measuring, then it is certainly true that eventually your information will, thermodynamically, disappear (see my previous article on Toom's rule for hints as to why this should be so). So in that case, the claim that these two bit states should never be accessible to each other clearly breaks down. So I'm a bit worried (1) about the arguments against Landauer's principle, from the point of view that digital information is only an approximation, but also (2) about the arguments for Landauer's principle and the fact that they might somehow depend on how one completes thermodynamics to talk about multiple equilibria.
Of course, there is also the question of how all this works for quantum systems. But then we’d have to get into what quantum thermodynamics means, and well, that’s a battle for another day!
Update: be sure to read Cris Moore's take on these two papers in the comment section. One thing I didn't talk about was the example Shenker used against Landauer's principle. This was mostly because I didn't understand it well enough; reading Cris's comments, I agree with him that this counterexample seems to have problems.
Teaching Myself to Teach
A confession: when I was an undergraduate, I really didn't go to classes as much as I should have. In particular, I didn't go to many of my physics and math classes. Why? Mostly because these were the classes where I had the least trouble picking up the material, and my own self-motivation was almost always enough to carry me through the class. So the classes I attended the most were my humanities classes (to get that valuable B.S. in literature 😉 ) and my elective scientific courses, particularly the courses I took in computational neural science. Now that I've started teaching my own class, I wonder how much my teaching suffers because of my past habit of skipping class.
One thing I’ve noticed is that I have a hard time lecturing around a textbook. I have this insane desire to explain the subject material in my own words, and following a textbook closely makes me feel like I am not much more than a glorified parrot. I think, perhaps, this just has to do with my own proximity to the subject matter I’m teaching, quantum computing. Teaching about a subject which is not your own field of research in some ways seems like it would be easier, because you feel less of a desire to make sure you get it just right.
I am also fighting, in my own teaching, the three-subject structure which was prevalent at Caltech. The three-subject structure was the expectation that a course would involve lectures on one part of the subject, homework on another, and tests on a third part. Of course there was always some overlap, but there were also a lot of "throw them into the fire" homeworks and tests. I can't count the number of times I realized halfway through a test, "Oh, so this is what that means!" Fine for self-motivated self-learners, but not the best for everyone else.
Anyway, I think I'm slowly beginning to learn to teach. The first lecture for the course, I, uh, how to put this nicely, well, I totally misjudged the appropriate difficulty level I should have been shooting for in the class. In particular, I'm teaching the course to professional master's students in computer science, some of whom have been out of school for a few years, and so, while I may want to teach them quantum theory in one lecture, this just is not very realistic! So starting in lecture two, we slowed down the pace quite a bit. Now, entering the fifth lecture, we're beginning to get to things which I would consider truly quantum computing subjects. Now the challenge will be, again, to restrain myself from running roughshod through this material. Certainly their last homework was significantly harder than their first two, so I'm really trying to slow the pace as we enter the really cool stuff in quantum computing. Interestingly, slowing down has resulted in, I think, teaching the material in a careful manner, but has also kept me from teaching the big picture as effectively as possible. Mixing these two styles, big picture and instruction on the details, is something I'm working on.
Well, back to working on my lecture and the next homework set!
Happy Cow Day!
Which is more disturbing, the fact that today is "Cow Appreciation Day" (and here I thought July 14th was just "Recover from Bastille Day Day"), or the fact that "Chick-fil-A" will award a free combo meal to anyone who comes to one of their 1200-plus restaurants on Cow Appreciation Day "fully dressed as a cow"?
The Case of the Many Worlds Theologian
Given the title of this blog, you can be sure that the moment I saw the article The Curious Case Of The Quantum Cardinal by Rupert Goodwins, I couldn't resist reading it. I must say that, while I liked the title, the article did make me cringe more than a few times in its description of quantum theory and quantum computing, but, as I've mentioned before, I have this reaction to a lot of popular science writing these days. What I did find interesting, however, was the claim that quantum computing is a direct challenge to the dogma of the Roman Catholic church. Whah? Quantum information scientists are the Galileos of our modern times?
But of course, this should have been obvious. Suppose, just for fun, that you are a believer in the many worlds interpretation of quantum theory (caveat: I'm giving a rather flippant version of the interpretation here. This is not meant to offend, but only as an exercise in a certain form of reasoning which I can only describe as "newspaper article writer reasoning"). In the many worlds interpretation, one believes that the different components of a quantum superposition are actually different universes, and that while sometimes these universes can merge back together (when quantum interference occurs, for example), most of the time these universes branch and are then totally inaccessible to us for the rest of time. (OK, I agree this sounds really strange, and well, all I can say is: "yep, pretty strange.") Now think about this from a theological perspective. Certainly one would expect that there are many universes out there in which Jesus did not lead the life described in the New Testament. One would even expect that there are many universes out there in which Jesus did not exist at all. This may or may not trouble Catholic theologians, but I can see that it must at least cause a bit of consternation. Somehow this whole thing reminds me of this webpage.
Can Good News Cure a Cold?
First the bad news. The bad news is that I am fighting a really nasty cold. I can hear a little more out of my right ear than yesterday, but I’m still pretty plugged up. Anyone who knows any home remedies that don’t involve a hot potato, feel free to advise 😉
The good news is that today we received notification that an NSF grant which I had applied for with Mark Oskin has been officially awarded. This is the first grant I've applied for, and I'm just going to sit here and enjoy the feeling of batting one thousand, because I know I'm lucky and I know that the long hard slog for grants is destined to smash my ego multiple times in the future. As many of you may know, I have been extremely lucky to find a position here at the University of Washington, during a time when my family has been undergoing quite a lot of, how shall I put it, difficult transitions. In many ways, the computer science and engineering department here was taking quite a gamble on me, and I can only begin to express how appreciative I am of what they have done for me. The awarding of this grant is, then, I hope, the beginning of their gamble paying off. It will certainly mean that I have a more certain future. Now if only this good news could cure my cold.
Toom's Rule, Thermodynamics, and Equilibrium of Histories
My recent post on thermodynamics and computation reminded me of a very nice article by Geoffrey Grinstein that I read a while back.
Suppose we are trying to store digital information in some macroscopic degree of freedom of some large system. Because we desire to store digital information, our system should have differing phases corresponding to the differing values of the information. For example, consider the Ising model in two or more dimensions. In this case the macroscopic degree of freedom in which we wish to store our information is the total magnetization. In order to store information, we desire that the magnetization come in, say, two phases, one corresponding to the system having positive total magnetization and the other corresponding to negative total magnetization.
Assume now that the system is in thermal equilibrium. Suppose further that there are some other external variables for the system which you can adjust. For the example of the Ising model, one of these variables could be the applied external magnetic field. Since the system is in thermal equilibrium, each of the phases will have a free energy. Now, since we want our information to be stored in some sort of robust manner, we don't want either phase to have a lower free energy than the other, since if it did, the system would always revert to the phase with the lowest free energy and this would destroy our stored information. Since we require the free energies of all information-storing phases to be equal, we can always solve these equality equations for some of the external variables. This means that if we plot out the phases as a function of the external variables, we will always end up with coexisting phases along surfaces of dimension less than the number of external variables. For our example of the Ising model in an external magnetic field, what happens is that the two phases (positive and negative total magnetization) only coexist where the magnetic field equals zero. If you have any magnetic field in the positive direction, then the thermodynamic phase which exists in equilibrium is the phase with positive total magnetization. So coexistence of phases, and in particular of information-storing phases, in the external variable space is always confined to a surface of dimension less than the number of external variables.
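Written out for the Ising example, with external variables the temperature T and applied field h (my notation, just to make the dimension counting explicit), the coexistence condition

```latex
F_{+}(T,h) \;=\; F_{-}(T,h)
```

is a single equation in the two external variables (T, h), so it can hold at most on a one-dimensional curve in the (T, h) plane; for the Ising model the spin-flip symmetry puts that coexistence curve on the line h = 0, below the critical temperature.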
What is interesting, and why this gets connected with my previous post, is Toom's rule. Toom's rule is a two-dimensional cellular automaton rule which exhibits some very interesting properties. Imagine that you have a two-dimensional square lattice of sites with classical spins (i.e. +1 and -1) on each of the lattice sites. Toom's rule says that the next state of one of these spins is specified by the state of the spin, its neighbor to the north, and its neighbor to the east. The rule is that the new state is the majority vote of these three spins (e.g. if the site has spin +1, its northern neighbor has spin -1, and its eastern neighbor has spin -1, the new state will be spin -1).
Toom's rule is interesting because it exhibits robustness to noise. Suppose that at each time step, instead of performing the correct update, each site gets randomized with some probability. What Toom was able to show is that, for the Toom update rule, if this probability of noise is small enough, then if we start our system with a positive magnetization (just like in the Ising model, we define this as the sum of all the spin values) our system will remain with a positive magnetization, and if we start our system with a negative magnetization it will similarly retain its magnetization. Thus Toom showed that the cellular automaton can serve, like the two-dimensional Ising model at zero applied field, as a robust memory.
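Here is a minimal sketch of this noisy update (my own illustrative implementation: the lattice size, noise rate, and the synchronous, periodic-boundary update are assumptions, and the bias knob anticipates the biased noise discussed next):

```python
import numpy as np

def toom_step(spins, p_noise=0.05, bias=0.5, rng=None):
    """One synchronous update of Toom's north-east-center majority rule on a
    periodic square lattice of +/-1 spins, followed by randomizing each site
    with probability p_noise (a randomized site is set to +1 with
    probability `bias`; bias = 0.5 is the unbiased case)."""
    rng = np.random.default_rng() if rng is None else rng
    north = np.roll(spins, -1, axis=0)
    east = np.roll(spins, -1, axis=1)
    # Majority vote of the site itself, its northern and its eastern neighbor.
    updated = np.sign(spins + north + east)
    # Noise: randomize roughly a fraction p_noise of the sites.
    hit = rng.random(spins.shape) < p_noise
    coins = np.where(rng.random(spins.shape) < bias, 1, -1)
    return np.where(hit, coins, updated)

# Start with positive magnetization and watch it persist under weak noise.
rng = np.random.default_rng(0)
spins = np.ones((64, 64), dtype=int)
for _ in range(200):
    spins = toom_step(spins, p_noise=0.05, bias=0.5, rng=rng)
print("magnetization per site:", spins.mean())
```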
But what is nice about Toom's rule is that it gives an even stronger form of robustness. Remember, I said that the noise model was to randomize a single site; here I meant that the site is restored to the +1 state with 50% probability and the -1 state with 50% probability. But what if there is a bias in this restoration? From the Ising model point of view, this actually corresponds to applying an external magnetic field. And here is what is interesting: for Toom's rule, the region where the two information-storing phases can coexist is not just the point where the (effective) external magnetic field equals zero, but a whole range of external field between some positive and negative value (set by the probability of noise). So it seems that Toom's rule violates the laws of thermodynamics!
The solution to this problem is to realize that the probability distribution produced by Toom's rule is not given by a thermodynamic Boltzmann distribution! Toom's rule is an example of a probabilistic cellular automaton whose steady state is not described by classical thermodynamics. This is exactly one of the models I have in mind when arguing that I do not know whether the eventual state of the universe is going to be in Gibbs-Boltzmann thermodynamic equilibrium.
Along these lines, Charlie Bennett and Geoffrey Grinstein have, however, shown that while the steady state of Toom's rule is not given by a Gibbs-Boltzmann thermodynamic distribution, if one considers the histories of the state of the cellular automaton, instead of the state itself, then Toom's rule is given by a Boltzmann distribution over the histories of the cellular automaton. It's at this point that my brain just sort of explodes. That a system's histories are in equilibrium is very strange: normally we think about equilibria being generated in time, but here we've already used up our time variable! I suspect that the answer to this puzzle can be reached by referring to the Jaynes approach to entropy, but I've never seen this done.
Back!
Posting has been nonexistent because I’ve been traveling. I’m now back from giving a lecture at the 4th biannual SQuInT student retreat, held at USC and organized by Todd Brun. I’ve been lucky enough to attend all four such retreats, once as a student, and three times as a lecturer (once as a graduate student, once as a postdoc, and once as whatever it is I am now 😉 .) This year I got the opportunity to lecture on quantum algorithms. I’ve put the contents of my slides online here.
The student retreat was a lot of fun, including a trip to the King Tut exhibit at LACMA. Unfortunately, my enjoyment was tempered by the nasty nasty cold I’ve come down with. I can’t hear anything out of my right ear. Bah!
SQuInT, by the way, stands for Southwest Quantum Information and Technology Network. Yeah, someone must have been smoking something when they thought that one up 😉
A Trivia Puzzle
Lately I've been playing trivia at a local pub (which I highly recommend). One of the categories they often use is a "decade" category. The answers to all of the clues are years in a particular decade (say, for example, the '80s). Each year is used exactly once and there are ten clues. Suppose you randomly fill in the years as answers to the ten clues, respecting the condition that every year must be used exactly once. What is the expected number of clues you will get correct?
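If you want to check your answer empirically before working it out on paper, a quick simulation will do it (a sketch with my own function names and trial count; it just shuffles the ten years and counts matches):

```python
import random

def trial(n=10):
    """Randomly assign the n years to the n clues (each year used once)
    and count how many land on their correct clue."""
    guess = list(range(n))
    random.shuffle(guess)
    return sum(g == clue for clue, g in enumerate(guess))

trials = 100_000
average = sum(trial() for _ in range(trials)) / trials
print(f"average number correct over {trials} random fill-ins: {average:.3f}")
```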