QIP 2007

Via Michael Nielsen comes the first announcement of QIP 2007

Dear Colleagues,
This is the first announcement for the tenth QIP (Quantum Information
Processing) Workshop, to be held in Brisbane, Australia, from January 30
through February 3, 2007.
The deadline for abstract submission for contributed talks (long and
short) and for posters is 4 November, 2006.
The deadline for early bird registration is 24 November, 2006.
Full details are available at the workshop website:
http://qipworkshop.org/
Hope to see you in Brisbane in 2007!
Regards,
Michael Nielsen

QIP is the premier quantum information science conference for theorists (why it is called a workshop is beyond me!). I’m teaching winter term, but I hope to be able to take the week to attend QIP.
As a dedicated conspiracy theorist, I think it is interesting to note that QIP has been moving towards progressively warmer climates. I mean, the QIPs in the early 2000s were all totally freezing (-12 is cold in both F and C) or accompanied by major storms. I suspect this is a sign of the maturation of the field. On the other hand, Brisbane was the site of a horrible massive hurricane last year (crap, I hope I didn’t just jinx Brisvegas.)

Swimming in a Sea of Qubits

Yesterday I was at Bell Labs for a one-day meeting on quantum computing. Bell where, the young ones ask? You know, the place where the transistor was invented! (Can you name the three who won the Nobel Prize in Physics for the invention of the transistor? How many people you meet walking down the street can name any of these three?)
Amazingly this meeting was covered by local media: see here. Any investors might be interested in the last few paragraphs of the article:

At least one audience member was impatient.
Jan Andrew Buck heads Princeton Group International, which backs biotech ventures. He said he is itching for a bare-bones quantum computer for plotting complicated routes and schedules.
“I think I can get a squeaky, scratchy quantum computer to market in two or three years,” Buck said. All he needs, he said, are investors with deep pockets and short deadlines.

Now I consider myself an optimist, but I think Jan Andrew Buck has just out-optimismed even my cheery outlook.
Back to the topic at hand, the meeting was fun! The first two talks, by David DiVincenzo and Isaac Chuang, were interesting in that they both made some arguments about whether the “sea of qubits” type architecture for a quantum computer is really feasible. Loosely, the idea of a sea of qubits is to have, say, a two-dimensional dense grid of qubits which you can control with nearest-neighbor interactions. One difficulty with this approach is that if you have a dense sea of qubits, it is hard to imagine how to get all of the elements you need to control these qubits from a classical controller outside of the quantum computer to each individual qubit. This is particularly worrisome for some solid state qubits, where high density is often needed in order to get controllable strong two-qubit interactions (like, say, in some quantum dot approaches), but it applies to many other types of implementations as well. David DiVincenzo talked about work he performed with Barbara Terhal and Krysta Svore on thresholds for two-dimensional spatial layouts (see quant-ph/0604090). Because the cost of this spatial layout was not huge, and in light of his work on a particular implementation of a superconducting qubit at IBM, David reconsidered, in his talk, whether the sea of qubits was really that bad of a problem. He concluded by discussing how perhaps techniques developed in making three-dimensional circuitry could be used to overcome the sea of qubits problem. Ike, on the other hand, talked about the issues of designing a quantum computer made out of ions, where the issue of getting your classical control in may not be as severe (other talks focused on the MEMS mirror arrays which will be used to control, in parallel, many thousands of ion trap qubits). Ike was one of the original people to point out the difficulties in the “sea of qubits” idea, and I can’t help but think the reason he started working on ion traps and not solid state implementations was in some part motivated by this problem.
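To get a rough feel for why the wiring worry bites, here is a toy back-of-the-envelope sketch (my own illustrative numbers, not anything from the talks): if every qubit in an L by L array needs its own classical control lines routed out through the edges of the chip, the number of lines grows like L squared while the perimeter only grows like L.

# Toy scaling estimate for classical control wiring in a dense 2D "sea of qubits".
# The lines-per-qubit and edge-channel numbers are made-up assumptions, purely illustrative.

def control_wiring(L, lines_per_qubit=2, edge_channels_per_site=10):
    """Compare control lines needed vs. routing channels available at the array edge."""
    qubits = L * L                           # qubits in an L x L grid
    lines_needed = lines_per_qubit * qubits  # a few classical lines per qubit
    perimeter_sites = 4 * L                  # boundary through which the lines must escape
    channels_available = edge_channels_per_site * perimeter_sites
    return lines_needed, channels_available

for L in (10, 100, 1000):
    need, have = control_wiring(L)
    print(f"L={L:5d}: need {need:8d} lines, edge supports ~{have:6d} (ratio {need / have:.1f})")

The ratio grows linearly with L, which is (crudely) the argument for escaping into the third dimension with the control circuitry.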
To me, the debate about what the architecture for a future quantum computer will look like is very interesting. Mostly because this debate has to do, I think, with quantum computing people taking very seriously what “scalable” means. I personally can’t stand the word “scalable.” Why? Well, mostly because it is put in front of every single proposal in which the authors can reasonably imagine some far-fetched way to scale their proposed quantum system up. Call me jaded. But what is fun to watch is that, now that there is serious discussion of many-qubit quantum computers, the real difficulties of scalability are beginning to emerge. Scalability is about money. Scalability is about ease. Scalability is about architecture. Which physical implementation will scale up to our future quantum computer, and what will the architecture of this computer look like? Depending on the day you ask me you’ll get a different answer. Which, I suppose, is one of the reasons why I remain but a lowly theorist…too scared to jump on the bandwagon I trust the most.

Shine On You Crazy Diamond

Syd Barrett, co-founder of Pink Floyd, dead at age 60.

Remember when you were young,
You shone like the sun.
Shine on you crazy diamond.
Now there’s a look in your eyes,
Like black holes in the sky.
Shine on you crazy diamond.
You were caught on the crossfire
Of childhood and stardom,
Blown on the steel breeze.
Come on you target for faraway laughter,
Come on you stranger, you legend, you martyr, and shine!
You reached for the secret too soon,
You cried for the moon.
Shine on you crazy diamond.
Threatened by shadows at night,
And exposed in the light.
Shine on you crazy diamond.
Well you wore out your welcome
With random precision,
Rode on the steel breeze.
Come on you raver, you seer of visions,
Come on you painter, you piper, you prisoner, and shine!

Translate to Classical, Then Laugh

If you ever let reviewers get under your skin, you’re going to waste a lot of your life with way too much grief. So my approach is mostly to first laugh, then send an email to my collaborators expressing outrage (vent), and then laugh again. Sometimes the reviewers’ comments are just priceless. For example, a recent review of a paper I was involved in had the following line:

However, the mere fact that the…transform can be applied efficiently is not a surprise – usually this is the case with quantum transforms, if one looks into the representation theory close enough.

Why is this funny? Well when I translate this over to classical algorithms I get the following sentence:

However, the mere fact that the circuit can be applied efficiently is not a surprise – usually this is the case with classical circuits, if one looks into the combinatorics close enough.

Ha! Now that’s funny.

Back From Outerspace

Posting has been low because I’ve been (1) camping on a ferry going through the Inside Passage in Alaska (where I spent my time doing lots of perturbation theory), (2) camping south of Mt. Rainier, and (3) celebrating the 4th of July.
In other personal news, I’m in Nature! Okay, well not quite, but this website was somehow listed on a list of fifty popular science blogs. Like all such listings, this is certainly Macbeth (you know, “full of sound and fury, signifying nothing.”)
Oh, and as of July 1, 2006, I am no longer a “Principal Research Scientist” but instead am a “Research Assistant Professor” in the department of computer science and engineering here at UW. I think this means that Scott can no longer officially use me as his physicist punching bag (although I must say I am certainly happy to oblige)? This is, of course, good news. The only drawback that I know of right now is that when I fill out my tax forms next year I will no longer be able to list my occupation as simply “Scientist.”
Next week I’m off to Bell Labs where I’ll be talking at a meeting called “Quantum Information Meets Nanotechnology” so I promise to have real quantum content next week.

The Computer Scientist and the Abominable Approximation

[Over at Shtetl-Optimized, Scott Aaronson has decided to take a shot at physicists with his cute little fable The Physicists and the Wagon. Scott specifically called me out, so I feel obliged to respond. Of course I would have liked to respond with a fable directly related to Scott’s main point, but that is damn hard, so instead I present to you a fable which may, or may not, bear some relationship to Scott’s post and whose moral is mostly to harp on computer scientists not trusting physicists. If confronted I will immediately deny that I wrote the fable below or that the fable represents anything coming close to my true feelings on the subject.]
Once upon a time and a very good time it was there was a Physicist coming down along the road and this Physicist that was coming down along the road met a nicens little boy named Computer Science. [ed note: bonus points if you recognize this mangled famous opening line.]
Physicist, being ever interested in learning new things, began to have a conversation with the nicens little boy named Computer Science. Computer Scientist, it turns out, had all of these really fun toys which were all labeled by combinations of letters and numbers (like P and NP and BPP and AC_0.) Physicist was quite confused by all of these letters. What did they mean? Why were there so many? It almost sounded like his friend Old-School Biology to him. [ed note: if you’re going to hammer CS, why not hammer other fields equally?]
Computer Science patiently explained to Physicist what all of these strange letter combinations were and the accompanying beasts which they described, but really Physicist only listened to a bit of what he said. He really liked the complexity classes P and BPP, and understood the deep dark hatred everyone should have for complexity classes like NP-complete or NEXP. He was a bit confused by PSPACE and wondered, in a bout of extraillusionary intelligence, whether he should tell Computer Science about Special Relativity. But he really didn’t understand Computer Scientist’s obsession with the endless array of complexity classes.
After blabbering on for quite a while, Computer Scientist noticed that Physicist was nodding off. “Why aren’t you paying attention, Mr. Physicist?”
“Well, you keep talking about this endless array of complexity classes, and while it all sounds quite fascinating, I’m wondering if you could get to the point?”
Computer Scientist then launched into a diatribe about “proving” all sorts of different things (which the computer scientist insisted on referring to as “theorems.” This made these things seem big and important, and secure, like how he felt when he was safe in his fenced suburban home.) Physicist couldn’t take much of this diatribe, so he interrupted, “But if you really feel strongly that these complexity classes are different, but you can’t prove it, why don’t you just accept it and move on? I mean, you’ve got ample experience telling you that P does not equal NP, right?”
At this point Computer Scientist’s head bulged, his veins began to stand out from his neck, and he emitted a long loud shriek which sounded a lot like fingernails being dragged across a chalkboard…and the nails breaking. Computer Scientist, however, had long mastered the art of immediate Zen meditation, and so began chanting to himself and calmed himself down by dumping the core of his memory and then rebooting.
“But, Physicist, if you can’t prove something, then how do you know it is true? Won’t you spend all of your life worrying about whether it is true or not?”
“Let me tell you a story” said Physicist, and therein he launched into a little sub-fable of his own:

Once upon a time, there was a clan known as the statistical physicists. These statistical physicists studied all kinds of interesting physical systems, their distinction being that they were very good with large numbers of interacting systems. They were particularly good at describing systems which changed their appearance (which physicists like to call “their phase”.) In other words they were particularly good at describing phase transitions.
Now some members of the clan of computer scientists were smart enough to learn a little about the theory of phase transitions. Some of these members of the computer science clan, like almost every rational being, were particularly enamored with computational problems which were NP-complete. When they investigated these problems, together with members of the statistical physics clan, they discovered that instances of these NP-complete problems came in different phases and that, just like the phases they studied in physics, they could study the phase transitions between these different instances of the problem. Now this was fun! And so, because this worked for a few instances of the different NP-complete problems, these crazy scientists conjectured that NP-complete problems were distinguished from other computational problems by the existence of different phases and a phase transition. Indeed they were even bolder and claimed that the hard problems, the ones for which no known polynomial time algorithm would succeed, were those problems near this phase transition.
Of course this was in some ways both profound and silly. Silly because the scientists could not prove that this was true. Profound because the scientists had discovered that a property of physical systems could be mapped to a computational problem in an interesting, and possibly fruitful, manner.
But scientists are a skeptical group. So no conjecture will last long without being challenged. So one scientist did. He examined the integer partitioning problem. The integer partitioning problem is, given a set of k integers between 1 and N, to decide whether there is a subset of these k integers whose sum is equal to the sum of the elements not in this subset. This problem is a beast of the NP-complete kind. So according to what others were boldly conjecturing, this problem should have had a phase transition. But this scientist claimed, in an article entitled “The Use and Abuse of Statistical Mechanics in Computational Complexity,” that this problem did not exhibit a phase transition. It was thus claimed to be a counterexample to the bold conjecture!
But not everyone was sold on this counterargument. Indeed, numerical results immediately began to dispute this counterargument. And then a member of the statistical physics crowd, Stephan Mertens, approached the problem like a physicist. He showed how to treat the number partitioning problem as a problem in statistical mechanics. He then showed that there was a phase transition in this problem and explained how the effects of working with a finite number of numbers affected the properties of this transition. Now, Mertens approached this problem with all the tools and lore of statistical physics. These tools involved many things that physicists were comfortable with, including those methods which physicists love, known as approximations.
So it might seem that the story would end here. Mertens had shown that there was a phase transition in the problem, identified where and how this phase transition occurred, and triumphantly explained scaling effects near where the phase transition occurred. But, no! Why? Well, because while Mertens had “shown” these results, he had done so using approximations which physicists were comfortable with, but which were not “proven!”
So physicists reading about this would probably have been happy. But not so, for those who work in the clan of computer science! There the approximation was considered an abomination, something so disgusting and foul that it should be banished to the far ends of the earth (no one apparently having told the computer science clan that the world was not flat 😉 )
Thus a brave group of computer scientists/mathematicians/mathematical physicists decided to see if what these crazy physicists were saying with their abominable approximation was true. And by true, they meant provably true. So Borgs, Chayes and Pittel (the brave group), in a beautiful forty page paper, investigated and proved results about the phase transition in the integer partitioning problem. And what did they discover? They discovered, to their surprise, that Mertens’s results, even using his abominable approximation, were correct! While he had made an abominable approximation, Mertens’s result agreed with the exact result (in appropriate infinite instance size limits.) Those damn physicists had invoked an unproven and abominable approximation and still gotten the right answer! (As a side note, the work of BCP also pins down the finite size effects in a rigorous manner.)
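[ed note: for the numerically inclined, here is a toy version of the phase transition the fable is going on about. It is a sketch of my own, with nothing of the actual Mertens or Borgs-Chayes-Pittel analyses in it: draw k integers uniformly from 1 to 2^b, ask how often a (near-)perfect partition exists, and watch the answer collapse as the control parameter b/k passes roughly one. The brute-force subset-sum check below keeps k small on purpose.]

# Toy numerical look at the number partitioning phase transition.
# Control parameter is b/k (bits per integer over number of integers); illustrative only.
import random

def near_perfect_partition(nums):
    """Brute-force subset sums: is there a split with discrepancy at most 1?"""
    total = sum(nums)
    target = total // 2
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return total - 2 * max(reachable) <= 1

def fraction_solvable(k, bits, trials=50):
    hits = sum(near_perfect_partition([random.randint(1, 2**bits) for _ in range(k)])
               for _ in range(trials))
    return hits / trials

k = 14
for bits in (8, 12, 16, 20, 24):
    print(f"b/k = {bits / k:4.2f}: fraction with a near-perfect partition ~ "
          f"{fraction_solvable(k, bits):.2f}")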

“Wake up Computer Scientist!”
“Oh, I’ve been awake. I just like to rest my eyes when I’m listening to stories. It helps me visualize the characters involved. For the physicists I was imagining a moocow.” [ed note: this joke only makes sense if you know where the first line of this post comes from. This is an example of an obsoke: a joke so obscure that perhaps only the author of the joke understands why it is funny.]
“So what do you think about my story?”
“I think that sometimes, even people like you, Physicist, can produce profound results even though you can’t prove why they work. It also seems that you don’t care so much if something is proven as much as if it agrees with your experience. I could never live that way, of course. It seems, so, so…uncertain!”
“Which reminds me, Computer Scientist, have you ever heard of this thing called quantum mechanics?”
“No, what is that? Some kind of car repair shop?”
“Heh. Let’s go to the pub and get some beer and then, boy do I have a story for you….”

Eprintweb

For quite a while now I’ve been looking for a good way to deal with the fact that I look at the arXiv almost daily for new papers, but have the bad habit of forgetting about what I want to read from a day’s listings. I’ve tried setting up a blog where the RSS feed from the ArXiv is posted to the blog and then I can delete the entries I don’t want to read. This is okay, but not optimal.
Today I got an email about the IOP’s Eprintweb website. This site acts as a simple front end to the arXiv. What is nice, however, is that when you click on the abstract of a paper, you can then save this paper to your own personalized page of preprints. You can also set up email alerts for keywords or authors. Okay, so this is a start! What I really want, though, is not just the ability to mark papers, but to place these marked papers in folders, and to be able to assign my own personalized comments to these papers (including things like a radio box with which I can assign my desire to read the paper, and then the ability to sort the papers by this rating). Even better if it would store a local copy of the PDF on my computer. That way even when I’m not attached to the internet the entire edifice of my personalized eprint archive is available for my use. Yep, I’m pretty picky. But maybe someday!
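For concreteness, here is roughly the sort of thing I am imagining, as a minimal sketch. It assumes the arXiv Atom API at export.arxiv.org and the feedparser package; the local file layout and the rating field are conventions I just made up, not anything Eprintweb or the arXiv actually offer.

# Minimal sketch of a personal eprint tracker: pull the latest quant-ph listings
# from the arXiv API and keep a local JSON file of papers with my own rating/notes.
import json
import os
import feedparser

FEED = ("http://export.arxiv.org/api/query?search_query=cat:quant-ph"
        "&sortBy=submittedDate&sortOrder=descending&max_results=25")
STORE = "my_eprints.json"   # invented local file name

def load_store():
    return json.load(open(STORE)) if os.path.exists(STORE) else {}

def update_store():
    papers = load_store()
    for entry in feedparser.parse(FEED).entries:
        arxiv_id = entry.id.split("/abs/")[-1]
        papers.setdefault(arxiv_id, {
            "title": " ".join(entry.title.split()),
            "rating": None,   # my "how badly do I want to read this?" score, filled in by hand
            "notes": "",
        })
    json.dump(papers, open(STORE, "w"), indent=2)
    return papers

if __name__ == "__main__":
    print(f"{len(update_store())} papers tracked in {STORE}")

Rating and sorting would then just be edits to that JSON file; caching the PDFs locally would be one more fetch per entry.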
Update: Jake points to Citeulike which satisfies most of the requirements I point out. I’m not sure I’m hip on the public nature of the posting however (wouldn’t want to offend people by not reading their papers!)

Turning Hilbert Spaces Discrete

Over at Information Processing, Steve Hsu has a post about a recent paper (with coauthors R. Buniy and A. Zee), hep-th/0606062: “Discreteness and the origin of probability in quantum mechanics,” which is probably of interest to quant-ph-exclusive readers (it was cross-posted to quant-ph today).
The basic idea of the paper is as follows. In the many-worlds interpretation of quantum theory a major challenge is to derive Born’s probability rule (which, of course, Born got wrong the first time!) One way that this can be achieved (well, sort of!) is as follows. Consider a world in which we prepare a particular quantum state and measure it in a particular basis. Now repeat this process for the same state and the same measurement. Do this an infinite number of times (more on this later…you know how I hate infinity.) Born’s probability rule predicts that the probability of a particular sequence of measurement outcomes is given by [tex]$|\langle s_1|\psi\rangle \langle s_2|\psi\rangle \cdots|^2$[/tex]. In the limit of an infinite number of measurements, the typical fractions of the different outcomes dominate this expression (i.e. the terms which recover Born’s rule.) In other words, the sequences with fractions which don’t satisfy Born’s rule have vanishing norm. So what do you do? Suppose you simply exclude these “maverick” worlds from the theory. In other words you exclude from the accessible states those with vanishing norm. (There are a bunch of things which bother me about this derivation, but let’s just go with it!)
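To see the “vanishing norm” claim in action for a finite number of repetitions, here is a little numerical check (my own illustration of the usual law-of-large-numbers argument, not a calculation from the paper): for a qubit measured N times with outcome probability p, add up the squared amplitudes of all measurement records whose observed frequency differs from p by more than some epsilon, and watch that total shrink as N grows.

# Total Born-rule weight of the "maverick" branches: length-N measurement records
# whose outcome frequency deviates from p by more than eps. Illustrative numbers only.
from math import exp, lgamma, log

def maverick_weight(N, p=0.3, eps=0.05):
    """Sum of binomial(N, k) * p^k * (1-p)^(N-k) over all k with |k/N - p| > eps."""
    weight = 0.0
    for k in range(N + 1):
        if abs(k / N - p) > eps:
            # work in logs to avoid underflow for large N
            log_term = (lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)
                        + k * log(p) + (N - k) * log(1 - p))
            weight += exp(log_term)
    return weight

for N in (10, 100, 1000, 10000):
    print(f"N = {N:6d}: weight of maverick worlds = {maverick_weight(N):.2e}")

The finite-N point of the paper is exactly that this number, while shrinking fast, never actually hits zero, so you need some extra ingredient before you are allowed to throw those branches away.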
Now the problem addressed in the paper is that, of course, the limit of an infinite number of experiments is pretty crazy. And if you cut off the number of experiments, then you run into the problem that the maverick worlds have small, but non-zero, norm. So why should you exclude them? What is suggested here is that you should exclude them because the Hilbert space of quantum theory isn’t quite right (the authors have arguments about this discreteness coming from arguments concerning gravity.) Instead of our normal Hilbert space, the argument is that a discrete set of vectors from the Hilbert space are all that are physically possible. One picture you might have is that of a bunch of nearly equally spaced vectors distributed over the Hilbert space, and unitary evolution for a time T is followed by a “snapping” to the nearest of these vectors. How does this discretized Hilbert space (isn’t calling it a discrete Hilbert space an oxymoron?) fix the problem of a finite number of measurements? Well, you now have a minimal norm on your Hilbert space. States which are too close together appear as one. In other words you can exclude states which have norms which are too small. So, if you put some discretization on your Hilbert space you will recover, almost, Born’s rule in the same manner as the infinite limit argument worked. An interesting argument, no?
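Just to make the “snapping” picture concrete, here is a cartoon of the sort of rule being described (my own toy, with an arbitrarily chosen finite set of allowed states; it is not the discretization argued for in the paper): evolve the state unitarily, then replace it by the allowed state with the largest overlap.

# Cartoon of a "discretized" qubit: evolve, then snap to the nearest member of a fixed
# finite set of allowed states (here the six Pauli eigenstates, an arbitrary choice).
import numpy as np

s = 1 / np.sqrt(2)
ALLOWED = [np.array(v, dtype=complex) for v in
           [(1, 0), (0, 1), (s, s), (s, -s), (s, 1j * s), (s, -1j * s)]]

def snap(psi):
    """Return the allowed state with the largest fidelity |<allowed|psi>|^2."""
    return max(ALLOWED, key=lambda v: abs(np.vdot(v, psi))**2)

# Evolve |0> by a small rotation about the y axis, then snap.
theta = 0.2
U = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)
psi = U @ np.array([1, 0], dtype=complex)
print("after U:   ", np.round(psi, 3))
print("after snap:", np.round(snap(psi), 3))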
One interesting question I’ve pondered before is how to make a discretized version of Hilbert space. Take, for example, a qubit. Now you can, as was suggested above, just choose a finite set of vectors and then proceed. But what if you want to avoid the “snapping” to this grid of vectors? Well, you might then think that another way to proceed is to have a finite set of vectors and then to allow unitary transforms only between these vectors. But when you do this, what unitary evolution you can apply depends on the vector you are applying it to. Nothing particularly wrong about that, but it seems like it might lead to a form of quantum theory with some strange nonlinearities which might allow you to solve hard computational problems (and thus send Scott Aaronson’s mind spinning!) So what if you don’t want to mess with this linear structure? Well, then you’re going to have to require that the discrete set of vectors are transformed into each other by representations of some finite group. Does this really limit us?
Well, it certainly does! Take, for instance, our good friend the qubit. An interesting question to ask is what are the finite subgroups of the special unitary group on a qubit, SU(2). It turns out that there aren’t very many of these finite subgroups. One finite subgroup is simply the cyclic group of order n. Since we are dealing with SU(2) and not SO(3), this is just the set of rotations in SU(2) about a fixed axis by multiples of the angle [tex]$\frac{4\pi}{n}$[/tex]. Another subgroup is the dihedral group of order n, which is just like the cyclic group, but now you add an inversion which corresponds to flipping the equator where the cyclic group states are cycling. Finally there are the binary tetrahedral group, the binary octahedral group, and the binary icosahedral group, which are SU(2) versions of the symmetries of the correspondingly named Platonic solids. And that’s it! Those are all the finite subgroups of SU(2)! (There is a very cool relationship between these subgroups and ADE Dynkin diagrams. For more info see John Baez’s week 230.) So if we require that we only have a discrete set of states and that the group structure of unitary operations be maintained, this limits us in ways that are very drastic (i.e. ways that conflict with existing experiments.)
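For the skeptical, here is a quick numerical sanity check of the 4π business (my own check, nothing deep): in SU(2) a rotation about a fixed axis by 2π/n only returns to the identity after 2n steps, since after n steps you pick up an overall minus sign, whereas a rotation by 4π/n generates a cyclic group of order exactly n.

# Cyclic subgroups of SU(2): rotation about z by angle theta is
# diag(exp(-i theta/2), exp(+i theta/2)). Check the order of the generated group.
import numpy as np

def rot_z(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def order(U, max_order=100):
    """Smallest m with U^m equal to the identity (up to numerical tolerance)."""
    V = np.eye(2)
    for m in range(1, max_order + 1):
        V = V @ U
        if np.allclose(V, np.eye(2)):
            return m
    return None

n = 5
print("order of rot_z(2*pi/n):", order(rot_z(2 * np.pi / n)))  # prints 2n = 10
print("order of rot_z(4*pi/n):", order(rot_z(4 * np.pi / n)))  # prints n  = 5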
So if one is going to take a discrete set of vectors and use only them for quantum theory, it seems that you have to modify the theory in a way which doesn’t preserve the unitary evolution. Can one do this without drastic consequences? I think that is an interesting question.