Science 2.0: Academic Reader

A new website of great interest to those of us buried under the crush of information in science: Academic Reader. Created by (I hope I have this right) Michael Nielsen, Peter Rhode, and Alexei Gilchrist, the website is a way to manage your academic reading:

The Academic Reader is a new web site that makes it easier to keep track of your scientific reading. Rather than going to multiple websites every day to keep up, we pull all the sources together in a single location, so you can keep track easily. Sources include the preprint arXiv, the Physical Review, and Nature, and many new sources will be added in the months to come, including sources outside physics.

Good stuff, check it out!
And yes, Scirate.com is still down. The open archives protocol feed it uses is back up, but it has been changed in ways that may take a bit to fix. Hopefully I can get the site running again before I drop off the edge of the internet and get married.

The Physics of "All-In"?

Combining two excellent topics, physics and poker, physics/0703122:

Universal statistical properties of poker tournaments
Authors: Clément Sire
We present a simple model of Texas hold’em poker tournaments which contains the two main aspects of the game: i. the minimal bet is the blind, which grows exponentially with time; ii. players have a finite probability to go “all-in”, hence betting all their chips. The distribution of the number of chips of players not yet eliminated (measured in units of its average) is found to be independent of time during most of the tournament, and reproduces accurately Internet poker tournaments data. This model makes the connection between poker tournaments and the persistence problem widely studied in physics, as well as some recent physical models of biological evolution or competing agents, and extreme value statistics which arises in many physical contexts.
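The model in the abstract is simple enough to play with directly. Below is a toy Monte Carlo sketch in the spirit of (but much cruder than) Sire's model: the blind doubles every hundred hands, each hand is a coin-flip confrontation between two surviving players, and with some probability a hand is an all-in. All parameter names and values here are my own illustrative choices, not the paper's.

```python
import random

def simulate_tournament(n_players=100, chips0=1000, p_allin=0.1, seed=7):
    """Crude toy tournament: the blind doubles every 100 hands, each hand
    pits two random surviving players against each other, and with
    probability p_allin the hand is an all-in confrontation."""
    random.seed(seed)
    stacks = [chips0] * n_players        # integer chips, so totals stay exact
    hands = 0
    while sum(1 for s in stacks if s > 0) > 1:
        blind = 2 ** (hands // 100)      # exponentially growing blind
        alive = [i for i, s in enumerate(stacks) if s > 0]
        i, j = random.sample(alive, 2)
        if random.random() < p_allin:
            bet = min(stacks[i], stacks[j])   # all-in, capped by short stack
        else:
            bet = min(blind, stacks[i], stacks[j])
        # a fair coin flip decides the hand
        winner, loser = (i, j) if random.random() < 0.5 else (j, i)
        stacks[winner] += bet
        stacks[loser] -= bet
        hands += 1
    return stacks, hands

stacks, hands = simulate_tournament()
```

Chips are conserved hand by hand, and the exponential blind guarantees the tournament ends; the paper's actual claim, that the rescaled chip distribution of survivors is stationary for most of the tournament, would take many runs and snapshots to check.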

Welcome to the Feast, Tums Provided

Over at Shtetl-Optimized, Scott goes a little ballistic on criticism he’s received over the wording of his and Umesh Vazirani’s letters to the Economist. (As a side note, it is kind of sad to see such a poorly written article in the Economist: I know for a fact that an early Economist article on Shor’s algorithm drew a lot of great people into the field of quantum computing.) Part of Scott’s issue is with “physicists” insisting on rigor when they themselves are “the headmasters of handwaving.” So when Scott says

Today it is accepted that quantum computers could not solve NP-complete problems in a reasonable amount of time.

and he gets a lot of flak from physicists insisting that his statement might be interpreted as BQP not containing NP, he gets rather ticked off since those “headmasters of handwaving” themselves make all sorts of statements like this and don’t seem to mind. “Double standard!” shouts Da Optimizer!
Now one could take Scott’s rant and turn it into a great computer-science physics flamewar, but what fun would that be? And I’m a softy (or a Chinese restaurant placemat, depending on your perspective), so I’d like to take what Scott has said and turn it around. Therefore, let me declare the following Pontiffical edict:

Welcome to the headmasters of handwaving club, theoretical computer scientists! We’ve got a seat ready for you right here at our table, all full of delightful theorems and lemmas which you can eat… without having to give a proof of their validity. Our feasts are grand, our parties even wilder, and our nights filled with wonder and awe. Sure, you may get a little indigestion, what with the unproven or even, dare I mention the word, wrong theorems tumbling around in your belly, but look at what you get in return! Did I mention the wild parties?

I guess in my mind, I actually think that computer science and physics have a lot more in common than either side would ever dare admit. For example, researchers from both fields, it seems to me, can be placed (borrowing the terminology of physics only because of stuff in my past light-cone) neatly on a linear diagram from “experimentalist” to “theorist.” Just as “experimental computer scientists” complain to death about the lack of usefulness of the algorithms invented by “theoretical computer scientists,” “experimental physicists” spend vast hours complaining about the indigestibility of the vast body of “theoretical physics.” But every so often, amid this tension, the theorists on both sides do something so remarkable and so practically important that the experimentalists get really excited and actually take the theory and put it to use. Similarly, theorists on both sides who don’t take heed of the experimental side of their field are, it seems to me, destined for obscurity. A theoretical physicist who ignores the fact that their pet theory violates nearly every experiment ever performed, and a theoretical computer scientist who works with a model of computation so irrelevant to modern computation that no one notices (and no, I don’t put the entire complexity zoo into this category: the reasons for studying the zoo go far beyond simply defining complexity classes so obscure as to be irrelevant… the beauty of the best complexity classes is exactly in their relevance to real-world computational questions), share a common destiny in the dustbin of irrelevant results.
So, rejoice, physicists and computer scientists! You masters of the twentieth century, makers of the grandest constructions in the world and in our minds: find peace in fighting against your common rising enemy, those damn pesky biologists!

D-wave In D-news

Lots of blogging and press picking up on D-wave and Orion, so I thought I’d collect a few here. The official press release is here. I’d love to hear from anyone who has attended.
Scott Aaronson called me a Chinese restaurant placemat in his The Orion Quantum Computer Anti-Hype FAQ (Update (3:17pm 2/13/07): Scott’s post now contains an update by the great Lawrence Ip, who now works for Google.) Doug Natelson, who gave an excellent talk here at UW a few weeks ago, poses three questions about the D-wave demo. Peter Rhode is every bit the skeptic and beats out Doug Natelson with four points. Ars Technica’s Chris Lee takes a shot at explaining adiabatic quantum computation and uses the word deathmatch here. You can find a bad quantum computing joke at the end of this blog post. I find this post amusing, if for nothing more than bringing politics into quantum computing. Coherence* remains the prettiest quantum computing website and has a choice Seth Lloyd comment: “I’ll be a bit of a skeptic until I see what they have done. I’m happy these guys are doing it. But the proof of the pudding is in the eating.”
More mainstream media produces some truly incredible hype. One of my favorites is at physorg.com, where we find the title “New supercomputer to be unveiled” along with the choice gibberish “A Canadian firm is claiming to have taken a quantum leap in technology by producing a computer that can perform 64,000 calculations at once.” I flipped a coin 16 times today; can I get some venture capital? 🙂 Personally I like Gizmodo’s title: “D-Wave Quantum Computer to Span Multiple Universes Next Tuesday?” They also use the word sugardaddy. If you want more reasons to be angry about hype or bad journalism, go over to a Wired gadget blog where you’ll find

“There are certain classes of problems that can’t be solved with digital computers,” said Herb Martin, the firm’s CEO, over a decidedly-noisy digital cell phone. “Digital computers are good at running programs; quantum computers are good at handling massive sets of variables.”

Turing is certainly turning in his grave over that first sentence and, since Peter Shor is alive and well, I wonder if he is spinning today?
And don’t even get me started on this EETimes article. Choice:

Nondeterministic polynomial (NP) problems are the most difficult to solve on conventional computers because each variable adds yet another dimension to its possible solutions.

No, no, no! So many no’s I can’t even write them all down. First of all, NP problems include problems in P, so they definitely aren’t the most difficult to solve on a conventional computer. Second, the essence of NP-complete problems is NOT just that you have an exponential search space. You’d think an electrical engineering rag would have someone who’d taken some computer science courses? Then, of course, EETimes only digs its grave deeper:

Quantum computers, on the other hand, can evaluate all possible solutions simultaneously and find the optimal solution, often in just a few clock cycles, thereby not only vastly speeding up the time taken to find the solution but also finding the most optimal result.

Okay, at that point I’ll admit I had to stop reading because my brain was about to explode.
Oh, and whatever you do, don’t search for “first quantum computer” if you’ve ever performed a quantum computing experiment (that includes a lot of MIT physics majors… ack, is NMR quantum computation really quantum computation?) You might get a little miffed about all the years you spent in grad school doing what you thought were small quantum computer experiments.

International Very Nearly Linear Collider

Article on a press release about the details of the International Linear Collider. Errr… well, sort of linear 😉 :

One unusual twist to the design, said Dr. Barish, is that the tunnels, rather than being laser straight through the ground, would curve with the Earth. “It isn’t obvious and it took us a while to demonstrate that we could actually design a machine that bends” he said, but that feature would allow the digging to stay within the same geologic layers and prevent liquid cryogenics from wanting to flow “downhill” from one part of the tunnel to another.

Quantum Engineering Sounds Fun

I missed this last year, but Yale has established an Institute for Nanoscience and Quantum Engineering. Who will be the first to fill out a tax form with “Occupation: Quantum Engineer”?
Update: Oh, and I missed this one too. The University of Maryland, NIST, and the NSA have established the Joint Quantum Institute under the direction of Christopher J. Lobb and Carl J. Williams. It will be hard for west coast types to avoid jokes about what they are smoking at this institute, won’t it? 🙂

Talk Next Week

For local Seattleites, the following shameless self-promotion message 🙂 Next Tuesday at 4pm I’m giving a talk in the Physics department (C421 Physics/Astronomy Building) for the Condensed Matter and Atomic (CMA) Physics Seminar. The title of the talk is “When Physics and Computer Science Collide: A Cross Cultural Extravaganza” and the abstract is

In 1994 Peter Shor discovered that computers operating according to quantum principles could efficiently factor integers and hence break many modern cryptosystems. Since that time, researchers from many disciplines (physics, computer science, chemistry, and mathematics) have been engaged in building an entirely new discipline now known as quantum information science. Being a highly interdisciplinary endeavor, quantum information science requires not just mastery of physics or of computer science, but an ability to take insights from both fields across the cultural divide. In this talk I will discuss how physicists can contribute to the computer science side of quantum computing, and how computer scientists can contribute to the physics side of quantum computing, via a series of vignettes taken from research in my group here at UW.

Scirate Top Papers Jan 23-Feb 6

Well, it’s been two weeks and I have had absolutely zero time to think about Scirate. (It was midterm week!) So far 71 users have registered. Whoop! I have certainly slowed down the progress of science (what do you think this blog and scirate.com are for, after all?) So what were the highest scited papers in the period Jan 23-Feb 6?
7 votes: quant-ph/0701173 [abs] Quantum walks on quotient graphs by Hari Krovi and Todd A. Brun
7 votes: quant-ph/0702031 [abs] A scheme for demonstration of fractional statistics of anyons in an exactly solvable model by Y.J. Han, R. Raussendorf, and L. M. Duan.
6 votes: quant-ph/0701165 [abs] A precise CNOT gate in the presence of large fabrication induced variations of the exchange interaction strength by M. J. Testolin, C. D. Hill, C. J. Wellard and L. C. L. Hollenberg
6 votes: quant-ph/0702020 [abs] How much of one-way computation is just thermodynamics? by Janet Anders, Damian Markham, Vlatko Vedral and Michal Hajdusek
5 votes: quant-ph/0701149 [abs], quant-ph/0702008 [abs]
The first paper with 7 votes, quant-ph/0701173, explores the role the symmetries of a graph play in (discrete) quantum random walks on these graphs. Of course, whenever a physicist sees the word “symmetries” the immediate reaction should be “change basis”! Indeed, for a proper choice of coin in the quantum random walk, the symmetries of the graph (its group of automorphisms) can be inherited by the unitary operator describing the evolution of the walk. Whenever you have a unitary operator which is symmetric under a representation of a group, then, via Schur’s lemma, you know that this unitary operator has a very nice form: if you decompose the representation into irreps, the unitary operator can only have support over the degeneracy space of each irrep. So, for quantum random walks, this means that the walk will be confined to a particular subspace. In the setup considered in the paper this works out to be a walk on the quotient graph obtained from the original graph and some subgroup of the automorphism group. Very fun stuff. The authors then go on to analyze hitting times, worrying mostly about the case of infinite hitting times. They develop a criterion for spotting when the hitting time on the quotient graph is not infinite (building on some prior work). So what’s the next step? Identifying when the walk will have fast hitting times would be nice. Also, can you use the above arguments to spot when classical walks will be exponentially slower?
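The quotient construction has an elementary classical analogue that is easy to check numerically (this sketch is mine, not the paper's, and it lumps a classical walk rather than a quantum one): a random walk on the 3-cube is symmetric under permutations of the bits, so it projects exactly onto a walk on the quotient graph, the path of Hamming weights 0 through 3.

```python
# Random walk on the 3-cube, lumped by its bit-permutation symmetry.
n = 3
verts = list(range(1 << n))

p = {v: 0.0 for v in verts}   # distribution on the cube, started at vertex 000
p[0] = 1.0
q = [0.0] * (n + 1)           # distribution on the quotient (Hamming weight)
q[0] = 1.0

for step in range(6):
    # one step on the cube: move to a uniformly random neighbor (flip one bit)
    p_new = {v: 0.0 for v in verts}
    for v in verts:
        for i in range(n):
            p_new[v ^ (1 << i)] += p[v] / n
    p = p_new
    # one step on the quotient: weight w -> w-1 w.p. w/n, w -> w+1 w.p. (n-w)/n
    q_new = [0.0] * (n + 1)
    for w in range(n + 1):
        if w > 0:
            q_new[w - 1] += q[w] * w / n
        if w < n:
            q_new[w + 1] += q[w] * (n - w) / n
    q = q_new

# lumping the cube distribution by Hamming weight recovers the quotient walk
lumped = [sum(p[v] for v in verts if bin(v).count("1") == w)
          for w in range(n + 1)]
```

In the quantum case, Schur's lemma is what guarantees the analogous block structure of the walk unitary, provided the coin respects the symmetry.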
The second paper with 7 votes, quant-ph/0702031, is four pages, so it must be going to PRL 😉 The basic idea of this paper is fairly straightforward. The authors point out that it is easy to think about generating the ground state of Kitaev’s toric code using methods within experimental reach in ion traps and in optical lattices. This prepared state can then be used to “demonstrate” anyon statistics. In other words, instead of preparing a state in Kitaev’s toric code by cooling to the (degenerate) ground state, one can just prepare such a ground state using a simple quantum circuit, perform the braiding operations, and observe the effects of the (abelian) anyon statistics. Okay, so let me play the devil’s advocate here (something I don’t do well since I’m a coward). Should we really claim that this would constitute a “demonstration of fractional statistics of anyons”? My worry here is with the word “anyon,” which, it seems, we usually restrict to quasiparticle excitations. Of course this may just be a matter of taste, but I’d be curious to hear what others think. On a less subjective and more concrete point, one interesting issue which was not addressed in the paper (at least on my admittedly fast first reading) is how errors will propagate in the scheme described for preparing the Kitaev state. Is the preparation in any way fault-tolerant? For example, if you’re doing this in ions, is it really possible with current two-qubit gate fidelities to demonstrate this in the six-qubit setting? Interesting stuff! How long before one of the experimental groups gets to exclaim “fractional statistics” after performing a few thousand experiments 🙂 ?
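The “prepare a stabilizer ground state by circuit instead of by cooling” idea can be illustrated on the smallest possible toy (this is my own miniature analogue, not the toric-code circuit from the paper): one Hadamard and three CNOTs turn |0000⟩ into the GHZ state, which is the simultaneous +1 eigenstate of the commuting checks X0X1X2X3 and Z0Z1, Z1Z2, Z2Z3.

```python
import math

def apply_h(state, q):
    """Hadamard on qubit q (qubit 0 = least significant bit)."""
    out = [0j] * len(state)
    s = math.sqrt(0.5)
    for i, a in enumerate(state):
        if a == 0:
            continue
        flip = i ^ (1 << q)
        if ((i >> q) & 1) == 0:
            out[i] += s * a
            out[flip] += s * a
        else:
            out[flip] += s * a
            out[i] -= s * a
    return out

def apply_cnot(state, ctrl, tgt):
    """CNOT with control ctrl and target tgt."""
    out = [0j] * len(state)
    for i, a in enumerate(state):
        out[i ^ (1 << tgt) if (i >> ctrl) & 1 else i] += a
    return out

def expect_pauli(state, xmask, zmask):
    """<psi| (X on xmask bits)(Z on zmask bits) |psi>."""
    val = 0j
    for i, a in enumerate(state):
        if a == 0:
            continue
        sign = -1 if bin(i & zmask).count("1") % 2 else 1
        val += state[i ^ xmask].conjugate() * sign * a
    return val.real

n = 4
state = [0j] * (1 << n)
state[0] = 1.0 + 0j                  # |0000>
state = apply_h(state, 0)            # circuit: H, then a CNOT ladder
for t in (1, 2, 3):
    state = apply_cnot(state, 0, t)  # now (|0000> + |1111>)/sqrt(2)
```

The paper's point is that the same trick scales: rather than engineering a many-body Hamiltonian and cooling it, a short circuit prepares the stabilizer eigenstate directly, and one can then braid and measure.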
Okay, enough Scirate pimping. Let’s see what the next round of papers bring (how long before ideas hatched at QIP hit the presses? 🙂 )

Orion Into The Future

Update: 2/5/07 After some conversations, I’ve realized that there is, I think, a big algorithmic wrench in the endeavor described below. Due to this I’ve added some text to this post in italics to describe what I think this wrench is and why my skeptic meter is getting higher and higher.
I’ve been meaning to blog about this for a while, but haven’t gotten my lazy fingers to do any of the typing: D-wave has announced that they are going to be demoing their 16-qubit machine, called “Orion,” on Feb 13 and Feb 15. Cool pictures here, here, and a pretty dilution fridge here. D-Wave’s homepage has a pretty picture of the two locations they’re demonstrating at (well, they are remotely demonstrating; I guess it’s kind of hard to haul your lab down to California!)
Okay enough pictures! Onto the details. Sixteen qubits! ???
First the details. The system they have set up claims to be an adiabatic quantum computer with sixteen qubits. Its Hamiltonian is made up of Ising interactions, plus a single-qubit term along the direction of the Ising interaction, and a transverse field. The idea, of course, is to use this for adiabatic quantum algorithms. Finding the ground state of a two-dimensional Ising model with a magnetic field (where the values of the Ising couplings and magnetic field terms specify a problem instance) is NP-complete. So the idea, I guess, is to start with an all-transverse magnetic field, start in the ground state of this system, and then drag the system to the ground state of the Ising model on your sixteen qubits. Sounds fun. Now one thing to think about is that 2 to the 16 is about 65 thousand. So a brute-force search for the ground state will most certainly be feasible and beat out the D-wave computer. But who cares, right? I mean the proof of principle is more important! Indeed. And there are many things to think about from what we know about the system… some positive and some negative.
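To make the “2 to the 16” point concrete, here is a brute-force ground-state search for a 4x4 Ising grid with random couplings and fields (the couplings, ranges, and grid layout are my own illustrative choices; I have no knowledge of D-wave's actual instance):

```python
import random

def ising_energy(bits, J, h):
    """E = sum_(i,j) J_ij s_i s_j + sum_i h_i s_i with spins s_i = +/-1
    read off from the integer bit string `bits`."""
    n = len(h)
    s = [1 if (bits >> i) & 1 else -1 for i in range(n)]
    e = sum(h[i] * s[i] for i in range(n))
    for (i, j), Jij in J.items():
        e += Jij * s[i] * s[j]
    return e

random.seed(0)
n = 16
h = [random.uniform(-1, 1) for _ in range(n)]
J = {}
for r in range(4):              # nearest-neighbor couplings on a 4x4 grid
    for c in range(4):
        i = 4 * r + c
        if c < 3:
            J[(i, i + 1)] = random.uniform(-1, 1)
        if r < 3:
            J[(i, i + 4)] = random.uniform(-1, 1)

# exhaustive search over all 2^16 = 65536 spin configurations
best = min(range(1 << n), key=lambda b: ising_energy(b, J, h))
e_best = ising_energy(best, J, h)
```

The 65536 energy evaluations take about a second even in interpreted Python, which is the point: at 16 qubits, classical brute force wins easily, and the interesting regime only begins once brute force becomes infeasible.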
Okay, since I’m generally a positive guy, I’ll start out with the negatives. 😉 First of all, I understand, and certainly the D-wave people understand, that we do not think quantum computers can solve NP-complete problems in polynomial time. We do believe, however, that they can achieve polynomial speedups over the classical algorithms for brute-force searching. So the question is whether this quadratic, Grover-type speedup is useful in practice. (Update 2/5/07: Okay, so there appears to be a big problem with this, as far as I understand it. If you have a Grover-type search where there is one marked item and you run a linear-interpolation adiabatic algorithm, you get no speedup. But if you slow down at just the right time, then you can get a quadratic speedup. Basically the idea is to slow down where the algorithm has its smallest gap. But now there is a problem: how do you know, a priori, when to perform such a slowing down? Further, how do you know what interpolation to try? Finally, NP-complete does not mean only one solution, so what do you do with multiple solutions? As far as I know, for a system like the Ising model in a magnetic field in 2D, we have no idea how to realize the quadratic speedup because we cannot tell when to slow down! So color me extremely skeptical about using this sort of setup for quadratic speedups.) Now there are lots of interesting questions here. For example, you need to compare the quadratic speedup to the best known SAT solvers. Does Groverish search really help? (I could be wrong here, but my impression was that it is not known how to beat the best SAT solvers??? Help, experts, correct me!) But the real problem I have is just one of PR. All this mention of NP-completeness for their adiabatic quantum computer makes it sound distinctly like they are claiming they can efficiently solve NP-complete problems. Don’t believe me? Well, read the comments on the blog post linked above and you’ll see what I mean.
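The gap story in the update can be seen in closed form. For the adiabatic Grover search over N items, the gap of H(s) = (1-s)(I - |ψ⟩⟨ψ|) + s(I - |m⟩⟨m|), with |ψ⟩ the uniform superposition and |m⟩ the marked item, is known exactly (this is the Roland-Cerf analysis, not anything specific to D-wave's machine), and it bottlenecks at s = 1/2 with width about 1/√N:

```python
import math

def grover_gap(s, N):
    """Exact gap between ground and first excited state of the adiabatic
    Grover Hamiltonian H(s) = (1-s)(I - |psi><psi|) + s(I - |m><m|),
    restricted to the relevant two-dimensional subspace."""
    return math.sqrt(1.0 - 4.0 * s * (1.0 - s) * (1.0 - 1.0 / N))

N = 2 ** 16
samples = [(k / 1000.0, grover_gap(k / 1000.0, N)) for k in range(1001)]
s_min, g_min = min(samples, key=lambda t: t[1])
# The minimum sits at s = 1/2 with gap 1/sqrt(N): a schedule that crawls
# only near s = 1/2 recovers the quadratic speedup, while a uniform-rate
# sweep must go slowly everywhere and loses it.
```

For Grover search the bottleneck location is known analytically; the complaint in the post is precisely that for a generic 2D Ising instance you do not know where the bottleneck is, so you cannot schedule around it.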
Now, I know this is nitpicky, but I think when you’re dealing in a field which is highly, highly, highly susceptible to hype, hype, hype, you need to be extra careful. (Update 2/5/07: Indeed, given that we do not know how to achieve even this quadratic speedup for this type of system, I don’t see why they can claim much relationship to quantum computing speedups at all.)
Okay, so that’s kind of a low blow, just disagreeing with PR. I’m happy to admit it’s not something that’s that important. The real questions, of course, are: will this thing work? Is it a quantum computer? What fidelities do they achieve? Whoa… one thing at a time. (1) It will work. If it didn’t, they’d be silly to demo it. And they are deadly serious (well, deadly if you are an NP-complete problem 😉 )
Okay, (2) is it a quantum computer? First comment: adiabatic quantum computation (error-free) is equivalent to the quantum circuit model (error-free). So adiabatic quantum computers (error-free) are quantum computers (error-free) in the most traditional sense. If I were selling Orion to my investors, I’d sell this point over and over and over again. Of course, for those of you who read the parentheses, you’ll know what I’m going to say next. We don’t know how to turn a system like the one they describe, with only adiabatic controls, into a fault-tolerant machine. We have a few ideas (notably the work of Jordan et al., which is very similar to work Kitaev and others have done… also early work by Childs et al.), but I don’t think we know how to build a fault-tolerant all-adiabatic quantum computer… yet! So should this matter? Well, let me quote from a paper intro I’m writing right now

It is tempting to consider all physical systems as being information processing devices which manipulate through their dynamics the information labelled by the system’s different states. This view of computation, however, is severely naive because it leaves out consideration of whether it is practically possible to robustly store and fault-tolerantly manipulate the information encoded into the physical system in the presence of noise and imprecise control.

To me you do not have a computer until you have a fault-tolerant computer. So to the question of whether they have a quantum computer we can say: adiabatic quantum computers are quantum computers if we can figure out how to make them fault-tolerant.
Okay, what about (3), the fidelities? This is where the real questions come in. At what temperature are they running their adiabatic quantum computer, and how does this temperature compare to the coupling strengths they have? More importantly, how does it compare to the gap in the adiabatic quantum algorithm (which will be smaller than the coupling terms!)? This latter problem is one of the big problems for error correction in adiabatic quantum systems. Since the gaps for adiabatic quantum algorithms scale inversely as a polynomial in the problem size, you need your temperature to scale in a similar way. Now of course this isn’t totally true, because it is possible for a temperature bigger than the gap to be USEFUL in an adiabatic quantum algorithm (a sort of hybrid adiabatic-annealing computer). However, our understanding of this type of algorithm is far from complete, and how it affects, say, quantum-circuit simulations on an adiabatic machine is not, as far as I know, understood.
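The “temperature must track the gap” point is just Boltzmann statistics. In a two-level caricature (my numbers are arbitrary, for illustration only), a fixed device temperature pitted against a polynomially closing gap drives the thermal excitation probability toward a coin flip:

```python
import math

def excited_prob(gap, T):
    """Equilibrium probability of the excited state in a two-level
    system with energy gap `gap` at temperature T (units with k_B = 1)."""
    return 1.0 / (1.0 + math.exp(gap / T))

T = 0.05                          # fixed device temperature, arbitrary units
probs = {n: excited_prob(1.0 / n ** 2, T) for n in (4, 16, 64, 256)}
# As the gap closes like 1/n^2 at fixed T, excited_prob climbs toward 1/2:
# the machine becomes about as likely as not to be thermally kicked out of
# the adiabatic ground state it is supposed to track.
```

This is only the crudest equilibrium picture; the hybrid adiabatic-annealing regime mentioned above, where temperature actually helps, is exactly where this caricature breaks down.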
Another thing to worry about in the fidelities is the couplings themselves. How well are they controlled? If I want a certain energy coupling in my adiabatic quantum algorithm, how certain can I be that this coupling is the correct one (say, Ising ZZ) or has the correct strength? If that control isn’t precise enough, you can cause a phase transition from the ground state you want to one you don’t want. Very bad.
Finally you might worry about the fidelities of two other components, the preparation and the readout.
Of course, all these questions about fidelity are ones which hopefully D-wave will settle, by either publishing about their system or convincing us all that they are doing legit things by building quantum computers that behave as expected (and eventually solving a SAT problem the NSA can’t solve… assuming the NSA doesn’t have a quantum computer already, of course! See, there I go, mentioning SAT instances in the same sentence as quantum computation when I really should have mentioned factoring!)
Finally, let me just say that I think the fact that D-wave is putting themselves out there and trying to push things is great. I worry about what a failure would mean for future quantum computing endeavors, but I love underdogs, so I have great sympathy for trying. (Update 2/5/07: But what should we make of the fact that the algorithmic case for their method seems, at best, to be in grave doubt? For me, this is dubious territory to tread. While I won’t cry doom, I will cry that the case is very shaky.) I also love the fact that they are going for breadth versus depth. I want more qubits, damnit, and if a crazy Canadian company is going to be the one to help me get those qubits, then I’m as happy as can be! (Update 2/5/07: I still like the fact that they are going with a lot of very, very noisy qubits. In my mind this makes the endeavor like NMR+. Whether this plus is big enough to fashion a real quantum computer is the billion-dollar question.)