Quantum computation was put on the map, so to speak, with Peter Shor’s discovery in 1994 that quantum computers could efficiently factor numbers. It is interesting to contemplate how this discovery will be viewed in the distant future. We are now over ten years out from Shor’s discovery. Perhaps this is too short a time to make a judgement, but let’s be arrogant and try to make such a historical judgement anyway. Along these lines, I’d claim that Shor’s algorithm is one of the most significant discoveries about algorithms in modern history. There is a certain group of algorithms, like Euclid’s algorithm for computing a greatest common divisor, which, in my mind, are among the most beautiful, eternal algorithms we know. (Algorithms from the code book, so to speak.) I would like to make the claim that Shor’s algorithm belongs in the same category as these algorithms. First of all, Shor’s algorithm solves a problem, factoring, which feels like one of those problems lying very close to the base of mathematics. Certainly most of us learned about factoring numbers at a very young age, and indeed the view of a number as a product of its factors shapes in a deep way the manner in which we think about integers. Second, Shor’s algorithm’s use of period finding reveals in a deep way how quantum theory changes the way we can process information. That global properties of symmetric quantum states can be extracted from quantum systems, but not from their equivalent classical systems, is a deep observation, and one which we are only now beginning to push in new directions.
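To see why period finding is the heart of the matter, it helps to sketch the classical reduction Shor exploits: all of the quantum machinery goes into finding the period (multiplicative order) of a random a mod N, and the rest is elementary number theory. Here is a minimal Python sketch of my own, with the period found by brute force rather than by a quantum Fourier transform (function names are mine, not Shor’s):

```python
import math
import random

def order(a, N):
    """Brute-force the multiplicative order r of a mod N -- the 'period'
    that Shor's algorithm finds efficiently with a quantum Fourier transform."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(N, tries=50):
    """Classical reduction: extract a nontrivial factor of odd composite N
    from the period of a random a mod N. Only the period-finding step
    is quantum in Shor's algorithm."""
    for _ in range(tries):
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                # lucky guess: a already shares a factor
        r = order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)   # a^(r/2) mod N
            if y != N - 1:          # require a^(r/2) != -1 mod N
                g = math.gcd(y - 1, N)
                if 1 < g < N:
                    return g
    return None

print(factor_via_period(15))  # a nontrivial factor of 15, i.e. 3 or 5
```

The brute-force `order` loop is exactly the step that takes exponential time classically; replacing it with quantum period finding is the entire (and profound) trick.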
So, I’ll claim, Shor’s algorithm is a very profound discovery which will be thought of for centuries to come as one of humanity’s most beautiful discoveries. Interestingly, I think, this also puts a huge cloud over those of us working to try to discover new quantum algorithms. For instance, Sean Hallgren’s discovery that quantum computers can efficiently solve Pell’s equation is an awesome result, but viewed in light of factoring, well, it’s just hard to compete! Certainly one effect of this is that looking for new quantum algorithms has gained a reputation as being a very difficult field. And indeed, if you define difficult as coming up with something deeper than Shor’s algorithm, then working towards new quantum algorithms can seem a hopeless task. Indeed this is one of the reasons there has been so much focus on the hidden subgroup problem: an efficient algorithm for this problem would lead to efficient algorithms for the graph isomorphism problem and certain shortest-vector-in-a-lattice problems, and these are big problems whose solution would represent big progress in algorithms. So we in quantum algorithms set our sights extremely high, because Shor set a bar which is at world-record pole-vault heights. Certainly this point of view argues for working on much smaller progress in quantum algorithms: understanding more basic, simple primitives even if they don’t lead to algorithms which are “better than Shor.” I, myself, struggle with thinking along these lines: our expectations are high and it is hard not to try to jump over the Shor barrier. But perhaps those with better self-control can move in these directions, and maybe, if we are lucky, this progress will eventually show us new ways that quantum computers can outperform their nasty rivals, classical computers.
Edgy Deutsch
Via Geomblog, I see that David Deutsch has won the 2005 $100,000 Edge of Computation Science Prize (see the post here.) Here is the award citation:
Although the general idea of a quantum computer had been proposed earlier by Richard Feynman, in 1985 David Deutsch wrote the key paper which proposed the idea of a quantum computer and initiated the study of how to make one. Since then he has continued to be a pioneer and a leader in a rapidly growing field that is now called quantum information science.
Presently, small quantum computers are operating in laboratories around the world, and the race is on to find a scalable implementation that, if successful, will revolutionize the technologies of computation and communications. It is fair to say that no one deserves recognition for the growing success of this field more than Deutsch, for his ongoing work as well as for his founding paper. Among his key contributions in the last ten years are a paper with Ekert and Jozsa on quantum logic gates, and a proof of universality in quantum computation, with Barenco and Ekert (both in 1995).
One reason to nominate Deutsch for this prize is that he has always aimed to expand our understanding of the notion of computation in the context of the deepest questions in the foundations of mathematics and physics. Thus, his pioneering work in 1985 was motivated by interest in the Church-Turing thesis. Much of his recent work is motivated by his interest in the foundations of quantum mechanics, as we see from his 1997 book.
Congrats to David Deutsch!
Ranking By Wins
Yesterday I attended a talk by local CS graduate student Atri Rudra describing work he did with Don Coppersmith (IDA, Princeton) and Lisa Fleischer (IBM, NY) on rankings for weighted tournaments (see here and here.) The result is pretty neat, so I thought I’d describe it here.
Suppose you have a tournament where every player plays every other player exactly once (the results are more general than this, but let’s keep it simple) and there are no ties. From this tournament, you’d like to develop a ranking of the players which best respects who beat whom. In other words, you’d like to rank the players from first place to last place in such a way that you minimize the number of times in your ranking that a lower-placed person beat a higher-placed person. Now, it turns out that this problem, finding the optimal such ranking, is NP-hard (this was recently shown by Ailon, Charikar, and Newman). I.e., as our tournaments get bigger and bigger, we have an ice cube’s chance in hell of finding this optimal ranking before we die 😉
So what do you do in the real world? Well, a simple strategy you might take is to produce a ranking simply according to the number of times each player has won. Thus the first-ranked person is the person with the most wins, the second-ranked person has the second most wins, etc., and we break ties in some arbitrary manner. A natural question to ask, then, is how close to the optimal ranking this simple strategy gets. Let’s measure our rankings by counting the number of times that our ranking fails (i.e. the number of times in our ranking that the lower-ranked person beat a higher-ranked person). Then there is a value for this quantity for the optimal ranking and for a ranking obtained by the simple strategy of ranking by number of wins. What is shown in the paper by Coppersmith, Fleischer, and Rudra is that the simple ranking by number of wins has at most five times as many failings as the optimal ranking. This is rather amazing, in my simple mind: we get within a factor of five (at worst) by simply using a ranking which orders by number of wins (with ties broken arbitrarily). In fact, what is also shown by these authors is that it doesn’t matter how you break ties: the worst case will always be within a factor of five.
Kind of neat, no? A natural question, of course, is whether there are strategies which can be efficiently computed but which beat this factor-of-five bound. Indeed! It turns out that there are algorithms which achieve a factor of three. These algorithms, however, are more complicated. What is so nice is that the simplest ranking you could come up with has such a nice approximation bound!
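To make the setup concrete, here is a small Python sketch of my own (the random-tournament setup and names are mine, not from the paper): rank by win count with arbitrary tie-breaking, then count the “failings” of the resulting ranking.

```python
import random

def upsets(ranking, beats):
    """Count the failings of a ranking: pairs where the lower-ranked
    player beat the higher-ranked one."""
    return sum(
        1
        for i in range(len(ranking))
        for j in range(i + 1, len(ranking))
        if beats[ranking[j]][ranking[i]]
    )

def rank_by_wins(beats):
    """Rank players by number of wins; sorting breaks ties arbitrarily."""
    n = len(beats)
    wins = [sum(beats[p]) for p in range(n)]   # booleans sum to win counts
    return sorted(range(n), key=lambda p: -wins[p])

# A random round-robin tournament on n players, no ties.
n = 8
beats = [[False] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w = random.random() < 0.5
        beats[i][j], beats[j][i] = w, not w

ranking = rank_by_wins(beats)
print(upsets(ranking, beats))  # at most 5x the (NP-hard to find) optimum
```

Finding the ranking that minimizes `upsets` exactly is the NP-hard problem; the theorem says this trivial `rank_by_wins` ranking is never worse than five times optimal.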
Qubiting for Dollars
Rod Van Meter makes the prediction that the first production quantum computer will cost forty bucks a qubit. He arrives at this number by assuming the first application will be factoring a 1,024-bit number, which requires about five kilobits (kiloqubits?), and adds in a factor of fifty for quantum error correction. Thus there is a total of a quarter of a million qubits, and he figures such a machine could cost around ten million dollars, for the figure of forty bucks a qubit. It’s interesting to reason about this by first setting the cost instead of estimating the actual costs. The reason I like this approach is that if we suppose that a similar quantum computer costs ten times as much, then we simply won’t be building such a computer. Of course, I’m not sure how much the NSA would pay for a quantum computer: it may indeed be in the hundred-million-dollar range if it factors 1,024-bit numbers.
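Rod’s arithmetic is easy to check; here is the back-of-envelope calculation, assuming his round numbers (5,000 logical qubits for a 1,024-bit factorization, a 50x error-correction blowup, and a ten-million-dollar machine):

```python
logical_qubits = 5_000      # rough qubit count to factor a 1,024-bit number
ec_overhead = 50            # assumed error-correction blowup per logical qubit
physical_qubits = logical_qubits * ec_overhead
machine_cost = 10_000_000   # assumed total price tag, in dollars

print(physical_qubits)                 # 250000 physical qubits
print(machine_cost / physical_qubits)  # 40.0 dollars per qubit
```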
Of course, I don’t believe in quantum error correction…or rather I don’t believe we will use the standard approach to quantum error correction to obtain a fault-tolerant quantum computer 😉 . Thus I’m betting the first quantum computer will cost around a dollar a qubit (although I certainly think some of these “qubits” may in fact be made up of hundreds to thousands to millions to 10^19 single quantum systems.)
It will be interesting to see how far off Rod and I are at date Q. I suspect Rod will be closer mostly because he’s actually worked in the real world.
Update: For comparison, the ENIAC cost about half a million dollars in 1945, which is about five million dollars in today’s money. The total number of vacuum tubes in that monster was around 20,000. That’s 250 of today’s dollars per vacuum tube. And, of course, today I can buy a computer with more than a billion transistors for under five hundred bucks!
Computation….On the Edge
Via Three Quarks Daily (Update: and also at Geomblog, which I somehow missed reading, but which has some great comments about quantum information science): The Edge (which you either love or hate) has announced a $100,000 prize in computation science. The list of nominations for the first prize was announced at the Festival della Scienza in Genoa on November 1st.
The list of those nominated is pretty interesting. From quantum computing land, those nominated are
CHARLES H. BENNETT, for his many ongoing contributions to the physics of information, including reversible computation, quantum cryptography, quantum teleportation, and quantum communication theory.
DAVID DEUTSCH, for the enormous potential of quantum computing in studying the architecture of the brain.
SETH LLOYD, for turning quantum computers from dream into device.
PETER SHOR, for his discovery of revolutionary algorithms for quantum computation, which will hasten the day when this fundamentally new mode of computation becomes practicable.
That nomination for David Deutsch seems a bit strange to me: it sounds like it was written for Roger Penrose! I would have nominated David Deutsch for fundamental work developing the idea of a quantum computer, realizing that quantum computers could outperform classical computers, and for fundamental insights connecting physics to the foundations of computer science.
Superpositions of Worms Going In and Out
Over at Fact and Fiction there is an interesting post about an article in New Scientist called Attack of the Quantum Worms.
Having not read the article, I thought I’d be an idiot and comment on what one could possibly mean by the statement that quantum computers are more susceptible to malicious attacks. I’m an even bigger idiot because I know very little about computer software security. But life’s about saying silly things and then getting chewed out by all you smart people, you know?
The first thing one might mean is that quantum data cannot be backed up due to the no-cloning theorem. OK, suppose I’m a hacker who wants to attack your quantum computer. Surely if I write some code which is executed by the computer and it damages the quantum state of the system, then, because I can’t back up the state of the quantum computer, I’m in big trouble. So it seems that one cannot prevent crashing a quantum computation by backing up the state of the system. But this isn’t quite right, is it? I mean, suppose we have a classical computation and we make two backups of the state of the computation at any time. Then we can use majority voting to test whether one of these states has been corrupted. But we’ve learned that we can do essentially the same thing for quantum computers: we can detect whether the quantum state has been corrupted by encoding it into a quantum error-correcting code. Now we need more than three qubits per bit, but still, this doesn’t seem like such a big deal (from a theorist’s perspective 😉 ). I’d also like to say that I don’t think this is even the way classical computers protect themselves against malicious software.
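For concreteness, here is a toy Python sketch (my own illustration, not anything from the article) of the classical triple-redundancy scheme: keep three copies of the data and repair any single corrupted copy by majority vote. The copying step is exactly what the no-cloning theorem forbids for unknown quantum states, which is why the quantum version must use an encoding instead of naive copies.

```python
from collections import Counter

def majority_restore(copies):
    """Classical backup scheme: given three copies of a bit string,
    recover each bit by majority vote across the copies."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*copies)]

data = [1, 0, 1, 1, 0]
copies = [list(data), list(data), list(data)]  # cloning: fine classically
copies[1][2] ^= 1                              # attacker flips one bit in one copy

print(majority_restore(copies) == data)  # True: the corruption is repaired
```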
So what about attacking the “quantum software?” Well in a standard circuit model of quantum computation, the software is just a list of classical expressions. There is nothing quantum about the data describing a quantum computing program. So it seems that there can’t be any difference here between the quantum and classical world.
Oh well, I really should get a copy of the New Scientist and figure out if any of this rambling has anything to do with the article (on a similar note, I don’t understand the second part of the post at Fact and Fiction. I just don’t see the relevance of copying in a fixed basis: this is something which we almost always avoid in quantum computing because it destroys the coherence properties of the subsystem being so copied.)
Signaling and Information
One basic concept which has emerged in classical physics (and then moved on to the quantum world) is that it doesn’t appear possible to send information faster than the speed of light. What I find fascinating about this is that it seems to be, like Landauer’s principle, a statement which connects physics (special relativity, local field theory, quantum field theory, etc.) with, essentially, information theory (the concept of a signal, the concept of information capacity).
But now suppose, as I have argued before, that what makes a physical system a “storage” device is very much a matter of the physics of the device. Thus, for instance, I could try to encode information into the position of a particle on a line. Is this a good way to encode information? Well, certainly we could try to do this, but in the real world it will be very hard to achieve many orders of magnitude precision on this measurement because the system will interact with the rest of the world. While isolated we can talk about such an encoding, but as we crank up the real world meter, we find that there are limits on this encoding. Or to put it another way, the ability of the system to store information is a function of how it interacts with the rest of the world, a function of the physics of the system.
And if the ability of a system to store information is a function of the physics of the system, why isn’t the no-faster-than-light rule, a rule which is essentially about information transmission in physical systems, also a function of the physics of the system?
The Gospel of Theoretical Computer Science
I’m in Pittsburgh this weekend for FOCS 2005. This is the first FOCS I’ve attended.
This evening there was a panel discussion at the business meeting (free beer!) on “Exciting the public about (theoretical) Computer Science.” (Update: for a detailed description of the business meeting see Rocco Servedio’s guest post at Lance Fortnow’s blog, for a real computer scientist’s comments see here, for a nice collection of comments head over to Geomblog, and for a fair and balanced counter, see Pit of Bable.) It was very interesting to hear about the severity of the public image crisis the field of theoretical computer science currently finds itself facing. It is true that even within the broader computer science community, theory is oftentimes seen as not very important. But beyond this, the public’s perception of the theory of computation is very limited, and a lot of the panel discussion focused on how to fix this. I mean, if I ask random people whether they know anything in theoretical computer science, what are the chances that they will? At best you might be very lucky and meet someone who has heard of the P versus NP question. On the other hand, mention physics to people, and they immediately think of nuclear bombs, black holes, perhaps lasers and string theory, quantum mechanics, and atoms with electrons zipping around a nucleus (well, I’m not saying any of these things are correct physics 😉 ). So how can theoretical computer science convey the excitement of the field to the general public?
Concrete suggestions like “publish in Science and Nature” and “get IEEE and ACM to hire someone to do PR” were offered up, and probably deserve serious consideration. It was interesting, to me, originally(?) a physicist, to hear how physics and astronomy were held up as prime examples of doing a good job conveying to the public the excitement of their research. It is true that physics and astronomy do a good job of conveying the excitement of the field, and there are various reasons for this.
But I’d like to focus on why theoretical physics has done a good job of exciting the public. Why? Because this is much closer to the theory of computation. Because let’s face it: CS theory doesn’t have things dug up from the ground (dinosaurs, early primates, archeology), nor beautiful pictures of alien environments (planetary science, all of astronomy). And theoretical physics, like CS theory, is hard. I mean really hard. And also, I would claim, theoretical physics and CS theory share a very fundamental similarity in that they are both essentially about advanced creative problem solving.
So let’s look at theoretical physics. How is the excitement of theoretical physics conveyed? Well, the first things that probably pop into most people’s minds related to theoretical physics are Stephen Hawking’s “A Brief History of Time” or maybe Brian Greene’s “The Elegant Universe.” Or maybe the recent NOVA program on “E=mc^2.” Now clearly none of these books or TV shows explains to the public the “real” story of theoretical physics. But what they do a good job of is convincing the audience that they, the audience, actually understand what is going on. As was mentioned on the panel tonight, people will hear about gravitons and think that they actually understand the physics of gravitons. But really, of course, they don’t. Do they even know why the exchange of particles can give rise to attractive forces, or how the attractive or repulsive character of these forces is connected to the spin of the exchanged particle? I seriously doubt it. Yet the authors of these books and TV shows have successfully given the audience the feeling that they do understand the field.
There are stories I have been told about Richard Feynman (OK, yeah, I just can’t resist another Feynman story) which say that when you went to one of his lectures, while you were listening you would think “Yeah! This is great! Everything makes total sense now!” But when you left the lecture and tried to recall the reasoning and what Feynman was teaching, you couldn’t reproduce the results. I maintain that what was happening here is the same thing good popular expositions on theoretical physics do: they convince the reader that they understand what is going on even though they most certainly do not. What is funny about the Feynman stories, of course, is that these are real physicists who themselves probably should understand the subject, and yet Feynman was able to convince them they understood what was going on, even though they really didn’t. Kind of scary.
So what lesson do I draw from this for the theory of CS? Well, they need a Richard Feynman and a Stephen Hawking! But seriously, they need to attempt to convey their results in a way which, while not totally faithful to the science, gives the reader a reason to believe that they understand what is going on. This, of course, is much harder to do as a computer scientist than as a theoretical physicist, because in the former rigor is held in higher esteem than in physics, where hand-wavy arguments have a much greater standing. But certainly even theoretical physicists (except the best) have to distort their understanding to meet the general public. So my advice to the theoretical computer science community is to let the rigor go but convey the spirit in a way that convinces the public they understand what is going on.
Now one might argue that this is dishonest. And to the brighter readers it certainly is. But remember, it’s not the bright readers who are the main concern: they don’t constitute the public. (If they did, I wouldn’t be writing this post, nor would FOCS be having this panel discussion. Nor would there be trials about whether intelligent design should be taught in science courses at the beginning of the twenty-first century.)
Another interesting perspective coming from physics is that theoretical physics has conveyed to the public that it is really pursuing something fundamental. The two big fundamentals are “learning the laws of nature” and “understanding the origin of our universe.” CS theory hasn’t, in my opinion, exploited the fact that it is studying a fundamental question: the fundamental limits of computation. This fundamental research direction, to me, is as deep as understanding what the laws of nature are (and if you catch me on some days I might even say that one is deeper than the other. Which is deeper depends on the day. And perhaps on the last time I’ve had a conversation with Scott Aaronson.) Certainly this is one of the reasons people get interested in the theory of computer science. I myself have very fond memories of reading “The Turing Omnibus,” which introduced me at an early age to the ideas of P versus NP, error-correcting codes, the von Neumann architecture, the theory of computability, etc. Thinking about these ideas is as exciting as thinking about string theory, or supersymmetry, or solutions to Einstein’s equations. And it certainly is a fundamental question about our universe (I began my thesis at Berkeley with the sentence “Our generous universe comes equipped with the ability to compute.” Heh. Pretty cheesy, no?)
I think I could go on about this subject ad nauseum. So I’ll stop. And, of course, I’m more of an outsider than an insider here. This is my first FOCS. On the other hand, when one of the panelists asked how many had published in Science or Nature, I was one of about four who got to raise their hands. And the paper even had some computer science (about universal quantum computers) in it! And remember, if you publish in Nature or Science, there is the possibility of being sucked into their press cabal, with articles about your work appearing simultaneously in newspapers around the world.
A New Kind of Disclaimer
Cosma Shalizi has posted a review of Stephen Wolfram’s “A New Kind of Science.” It’s not often that you find a review which begins
Attention conservation notice: Once, I was one of the authors of a paper on cellular automata. Lawyers for Wolfram Research Inc. threatened to sue me, my co-authors and our employer, because one of our citations referred to a certain mathematical proof, and they claimed the existence of this proof was a trade secret of Wolfram Research. I am sorry to say that our employer knuckled under, and so did we, and we replaced that version of the paper with another, without the offending citation. I think my judgments on Wolfram and his works are accurate, but they’re not disinterested.
or a review that ends (well almost) with
This brings me to the core of what I dislike about Wolfram’s book. It is going to set the field back by years. On the one hand, scientists in other fields are going to think we’re all crackpots like him. On the other hand, we’re going to be deluged, again, with people who fall for this kind of nonsense. I expect to have to waste a lot of time in the next few years de-programming students who’ll have read A New Kind of Science before knowing any better.
Your Symmetry Broke My Quantum Computer?
An article in Scientific American (of all places…. I stopped reading Scientific American when they started a section on science/pseudoscience. Sure, I agree with them, but I don’t want to read a science magazine to learn how science is different from pseudoscience; I already know that. Plus they dropped the amateur scientist section and the mathematical recreations section: really the two best reasons to read Scientific American in the good old days) describes a mechanism for decoherence due to symmetry breaking.
Jeroen van den Brink and his colleagues at Leiden University in the Netherlands, however, suggest that even perfect isolation would not keep decoherence at bay. A process called spontaneous symmetry breaking will ruin the delicate state required for quantum computing. In the case of one proposed device based on superconducting quantum bits (qubits), they predict that this new source of decoherence would degrade the qubits after just a few seconds.
The paper in question, published in Physical Review Letters (and available as cond-mat/0408357), presents an interesting mechanism for decoherence. What is most interesting about this decoherence mechanism is the rate they obtain: [tex]$t_D = {N h \over k_B T}$[/tex], where N is the number of microscopic degrees of freedom, and h, k_B, and T should be recognizable to every physicist 😉
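Plugging numbers into this formula is a quick sanity check. This is my own back-of-envelope, assuming a dilution-refrigerator temperature of about 50 mK (typical for superconducting qubits) and N on the order of 10^9 microscopic degrees of freedom; with those assumptions you land right at the few-second scale quoted above:

```python
h = 6.626e-34    # Planck's constant, J*s
k_B = 1.381e-23  # Boltzmann's constant, J/K
T = 0.05         # assumed operating temperature, kelvin (~50 mK)
N = 1e9          # assumed number of microscopic degrees of freedom

t_D = N * h / (k_B * T)  # the decoherence time from the formula above
print(t_D)               # on the order of one second
```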
What does this mean for quantum computers? Well the above might indicate that this is some fundamental limit for quantum computing (and in particular for superconducting implementations of quantum computers for which this result will hold). But I don’t think this is true. I’ll let the article explain why:
Not everyone agrees that the constraint of a few seconds is a serious obstacle for superconducting qubits. John Martinis of the University of California at Santa Barbara says that one second “is fine for us experimentalists, since I think other physics will limit us well before this timescale.” According to theorist Steven M. Girvin of Yale University, “if we could get a coherence time of one second for a superconducting qubit, that would mean that decoherence would probably not be a limitation at all.” That is because quantum error correction can overcome decoherence once the coherence time is long enough, Girvin argues. By running on batches of qubits that each last for only a second, a quantum computer as a whole could continue working indefinitely.