Kielpinski Spaces

Reading Light by M. John Harrison, I ran across a neat reference to quantum computing.
So at the beginning of the book a main character kills a guy and returns to a lab (of course, doesn't everyone go to the lab after they've murdered someone?) where they are working with "q-bits" [sic]. Then this choice line (p.6):

“We can slow down the rate at which the q-bits pick up phase. We’re actually doing better than Kielpinski there – I’ve had factors of four and up this week.”

Despite the Cornell spelling (is it so cold at Cornell that David Mermin loses the "u"?), cool! Hopefully many of you will recognize the reference to Dave Kielpinski, who did some amazing ion trap quantum computing experiments at NIST and is now at Griffith in Australia. Okay, cool: a reference to a real quantum computing researcher.
But it gets better! A few lines later:

Somewhere off in its parallel mazes, the Beowulf system began modelling the decoherence-free subspaces – the Kielpinski space – of an ion pair…

Not only q-bits [sic] but also decoherence-free subspaces (no subsystems, alas)! And indeed this is a direct reference to papers Kielpinski was involved in: "A decoherence-free quantum memory using trapped ions," D. Kielpinski, V. Meyer, M.A. Rowe, C.A. Sackett, W.M. Itano, C. Monroe, and D.J. Wineland, Science 291, 1013 (2001) and "Architecture for a large-scale ion-trap quantum computer," D. Kielpinski, C.R. Monroe, and D.J. Wineland, Nature 417, 709 (2002). The former paper saved my butt during my thesis defense. An AMO physicist, about halfway through my defense, said something like "Well, this theory is all good, but what about in the real world?" My next slide was a plot yoinked from that first paper showing the first experiment which demonstrated slower decoherence in a decoherence-free subspace under ambient conditions.
And, dude, from now on I am totally calling the DFSs in ion traps “Kielpinski spaces.”

FOCS 2007, 4 Quantum

The list of FOCS 2007 papers has been posted (via Geomblog). Quantum papers:

Andrew M. Childs, Leonard J. Schulman and Umesh V. Vazirani.
Quantum algorithms for hidden nonlinear structures
[arXiv:0705.2784]
Oded Regev and Ben Toner.
Simulating Quantum Correlations with Finite Communication
[not available online yet 🙁 ]
Dorit Aharonov, Daniel Gottesman, Sandy Irani and Julia Kempe.
The power of quantum systems on a line
[arXiv:0705.4077]
Andris Ambainis, Andrew Childs, Ben Reichardt, Robert Spalek and Shengyu Zhang.
Any AND-OR formula of size N can be evaluated in time N^{1/2+o(1)} on a quantum computer
[arXiv:0704.3628 and arXiv:quant-ph/0703015]

SciRate.com Comments

Okay, so yeah, yeah, I'm going to blabber on about SciRate.com again. So if you're one of those grumps who think that Digg is what you do in the dirt and MySpace refers to your own personal space, you can stop reading now.
For those of you who haven’t checked out SciRate.com recently, you might not know that there is now a comment feature where you can comment on each paper posted. While there have only been a few comments so far, I think most readers of this blog (all three of you!) would be interested in each of these comments. So here is a digest of some recent comments:

User tobiasosborne writes a comment about 0705.0556, "Random Unitaries Give Quantum Expanders" by M. B. Hastings, giving us a good expanded synopsis of the significance of this paper.
User Steven reads 0706.1966 so that we don’t have to.
User matt.hastings points us to 0706.3612, “Chiral entanglement in triangular lattice models” by D. I. Tsomokos, J. J. Garcia-Ripoll, and J. K. Pachos, which describes a model which just might be a chiral spin liquid.
User dabacon (now who could that be?) comments on the title of 0707.0021, “Fault Tolerant Adiabatic Quantum Computation” by Daniel A. Lidar, which he finds reason to object to. Update: And a conversation ensues.

Pretty cool, eh? Well I think it is cool, but then again, I think science benefits from open discussion (and I wrote the damn website myself, heh.)

Shiny Tiny Ion Traps Are Bad?

So suppose you are thinking of building an ion trap quantum computer. Since putting all your ions in one trapping region is just silly when you get to large numbers of ions, you are led to think about using multiple ion trapping regions. Then you are led to two possible design choices: whether to move the ions between these regions, or whether you can couple the ions to flying qubits like photons which can then couple back to ions. If you're going to do the former, you are quickly led to the notion that it is better to have the qubits packed pretty densely, both because you don't want a gigantic quantum computer and because you want to reduce the amount of moving you've got to do. So making small traps is of great import, and a lot of groups have been doing just that, with trap sizes of a few tens of microns.
But, uh oh, what happens if you make your traps smaller? Well, one effect is that the heating rate of your ions goes up. This heating is believed to come from fluctuating patch potentials on the trap surface. The noise scales roughly like one over the size of your trap to the fourth power, so you can imagine that going to small traps might make for some rough heating. And indeed, for the small traps you'd like to use for ion trap quantum computers, this heating becomes pretty unbearable. That is, at room temperature it becomes unbearable. But there is evidence that this heating effect is thermally activated, which is a fancy way of saying that if you cool your trap down, this heating starts to go away. Indeed, Chris Monroe's group has cooled a needle trap (think two ball point pens pointing at each other) to 150 Kelvin and seen a suppression in this heating rate. Indeed, one of my favorite lines at FTQC II was when Chris Monroe said of this [tex]$1/d^4$[/tex] heating that "we'll get rid of it before we understand it."
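To get a feel for just how punishing that scaling is, here's a quick back-of-the-envelope number (purely illustrative, not from any particular experiment): if the heating rate really goes like [tex]$1/d^4$[/tex], then shrinking a trap from [tex]$d = 200~\mu{\rm m}$[/tex] down to [tex]$d = 50~\mu{\rm m}$[/tex] costs you a factor of [tex]$(200/50)^4 = 256$[/tex] in heating rate. Every factor of two in miniaturization is a factor of sixteen in heating, which is why those tens-of-micron traps start to look so grim at room temperature.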
And indeed, today on the arXiv there is even more good news for getting rid of this noise. In 0706.3763 the ion trapping group of Isaac Chuang's lab at MIT describes experiments with a surface electrode trap which has been cooled down to near 4 Kelvin:

0706.3763
Title: Suppression of Heating Rates in Cryogenic Surface-Electrode Ion Traps
Authors: Jaroslaw Labaziewicz, Yufei Ge, Paul Antohi, David Leibrandt, Kenneth R. Brown, Isaac L. Chuang
Dense arrays of trapped ions provide a promising avenue towards large scale quantum information processing. However, the required miniaturization of ion traps is currently limited by sharply increasing decoherence at sub-100 um ion-electrode distances. We characterize heating rates in cryogenically cooled surface-electrode traps, with characteristic sizes ranging from 75 um to 150 um. At 6 K, the measured rates are suppressed by 7 orders of magnitude below room temperature values, and are two orders of magnitude lower than previously published data. The observed heating depends strongly on thermal processing of the trap, which suggests further improvements are possible.

Yep, they get 7 orders of magnitude slower heating in their trap by cooling from room temperature to 6 Kelvin (which, I guess, is near 4 Kelvin 🙂 ), and two orders of magnitude lower heating than previously published ion trap data. Interestingly, the effect depends strongly on how the trap is made. Shiny traps are bad, apparently 🙂
So is the future of ion trap quantum computing to work at 4 Kelvin? Or might understanding the properties of the annealing of the trap which leads to [tex]$1/d^4$[/tex] heating suppression give clues as to how to design room temperature traps without this heating present?

Cook-Levin for Physicists

It's a late night, and you're getting tired. You've just written down a very complicated and large Hamiltonian. Sure, it's a classical Hamiltonian, but it's still rather beastly. A big beast which takes as input the configuration of a bunch of subsystems and outputs an energy of the system when it is in that configuration. And like all respectable Hamiltonians it's got some of the standard features. It is local in nature and, this being the good old classical world, if someone gives you a classical configuration you can go through and calculate all of these local contributions to the energy and figure out the total energy. Your Hamiltonian isn't very uniform, so you really do have to go through and calculate what each of these terms is. A long calculation, but not one you're much worried about carrying out. And really it isn't just a Hamiltonian for one system, but for a system of growing and growing size. But even this doesn't bother you much, since the cost of calculating the energy of a configuration doesn't seem to grow too fast as a function of the system size.
On the other hand, what you're most interested in, what you're doing up so late at night, is that you really want to find the ground state configuration of this Hamiltonian (or if there are multiple ground states, at least one of them). Actually, since it is late at night, you'll settle for even less. You know that for each term in your Hamiltonian which couples a few subsystems, you can go through each local configuration on these few subsystems and calculate the possible energies that come from this term. From this list you can write down the minimal value of the energy coming from this term. And because it's late at night, what you really want to know is not necessarily the ground state, but instead whether the ground state energy of the system is the sum of all of these minimum energies. Can each separate term in your energy take on its minimal value?
Well, it's late and you've downed more coffee than a truck-driving mathematician, but even in this state you realize that this isn't going to be an easy problem. But wait! Who is that sitting across the room from you? You didn't notice him come in. Strange. What? He's trying to say something to you. A bunch of numbers he's reciting. Hmm, strange. Those numbers seem to be related to configurations of the local subsystems in your Hamiltonian. Strange. Okay, well, you are a physicist, so your hallucinations are not usually what other people would call normal. Well, you'd better write the configuration down before your hallucination ends. Okay, now you've got a configuration for your system. What to do with it? What to do with it? Oh, obvious! Calculate its energy. Cool. Now what? What about calculating whether this configuration minimizes each individual term in your Hamiltonian? Hmm, that's a simple computation. So do it! What? Why yes, yes it is a configuration which minimizes each term in your Hamiltonian! Wonder of wonders, this hallucination has done all your hard work for you and given you a ground state which minimizes each individual term in your Hamiltonian. And while you know you are hallucinating, you are certain that the calculation is true, because even during periods of mental inebriation your calculating abilities are first rate, and you checked that each term individually attained its minimal value. Holy moly, what an awesome hallucination. Instead of having to try every single configuration (a task which scales elephantly, err, exponentially, it is late you know), or even being smart about doing the calculation in a clever fashion, this hallucination has provided you with a witness to the fact that your energy can minimize each term individually, and all you have to do is the individual local energy calculations. You need to start having more dreams like this.
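(Since I can't resist making that late-night check concrete, here is a minimal sketch in Python, with a made-up encoding of the Hamiltonian as a list of local terms: each term is a few subsystem indices plus a little energy table. Nothing here is from a real library; it's just the verification step, the part that's cheap even when finding the configuration isn't.)

```python
def minimizes_every_term(terms, config):
    """Check, in time linear in the number of terms, whether `config` makes each
    local term take on its minimal possible value.  Each term is a pair
    (indices, table), where `table` maps a local configuration (a tuple of the
    values on those indices) to an energy."""
    for indices, table in terms:
        local = tuple(config[i] for i in indices)
        if table[local] != min(table.values()):
            return False
    return True

# Tiny example with two bits and two terms:
#   term 1 (on bits 0,1): zero energy if the bits agree, one if they disagree
#   term 2 (on bit 1):    zero energy if bit 1 is 1, one otherwise
terms = [
    ((0, 1), {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}),
    ((1,),   {(0,): 1, (1,): 0}),
]

print(minimizes_every_term(terms, [1, 1]))  # True: both terms sit at their minima
print(minimizes_every_term(terms, [0, 0]))  # False: the second term is not minimal
```

That's the whole trick: checking the hallucination's configuration is easy; producing it yourself is the part that seems to require magic.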
A few weeks later.
You've written down your Hamiltonian and know that it has a ground state which minimizes each term in the Hamiltonian. Fame and fortune have, strangely, however, not descended upon you for this great achievement. However, for each Hamiltonian you write down, it seems that you can put yourself in this hallucinatory state and then, lo and behold, a magical proof appears about whether your system has a configuration which minimizes each term individually. Sometimes, of course, there isn't such a configuration, and your hallucinations produce configurations which you quickly calculate don't have minimal energies for each term. You're never fooled and always right. Life ain't that bad.
A few months later.
Why don't these physicists recognize your genius? Sure, it was a hallucination which led you to your ansatzes, but you heard that Bethe guy was on something when he came up with his ansatz too. Uh oh, time to apply for faculty jobs. This might get ugly.
A few years later.
Ouch. Well, that physics thing didn't pan out. But this new computer industry is kind of interesting. Just the other day you were fiddling around with an interesting little simulator. This thing simulated what is called a Turing machine. Now why the computer scientists are so obsessed with this Turing fellow and his simple machine, you certainly can't fathom. But it's a cool little gadget. Basically you've got yourself a big long tape where you can write symbols into little cells, and a machine which moves from cell to cell, writes into cells, and changes its internal configuration in a manner depending on what the current cell reads and the internal configuration of the machine. There is this big old table in which you can write down all the information about the Turing machine in a nice small manner. It says: if you read symbol X and your internal state is Y, then write symbol Z on the tape, move according to rule R, and change your internal state to W. Cool little gadgets, these Turing thingamabobbers.
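(If you want to see just how little is in one of these tables, here is a toy rendition in Python. The encoding is entirely my own made-up one, but it matches the description above: the table maps (internal state, symbol read) to (new internal state, symbol to write, which way to move).)

```python
from collections import defaultdict

def run(table, tape, start="q0", halt_states=("qA", "qR"), max_steps=1000):
    """Run a Turing table on a tape string and report which halt state it reaches."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # blank symbol "_" everywhere else
    state, head = start, 0
    for _ in range(max_steps):
        if state in halt_states:
            return state
        state, cells[head], move = table[(state, cells[head])]
        head += 1 if move == "R" else -1
    return None  # didn't halt within max_steps

# Example: a machine that accepts (halt state "qA") iff the first tape symbol is "1".
table = {("q0", "1"): ("qA", "1", "R"),
         ("q0", "0"): ("qR", "0", "R")}
print(run(table, "10"))  # qA
print(run(table, "01"))  # qR
```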
So now that you're employed in the computer industry you get to spend your days programming these little Turing thingamabobbers. Hours of your day spent toiling over the design of the table. Yes, it is a painful existence. But the pay is better than in physics, so that is some consolation. And your boss doesn't make you work too many hours. On the other hand, he wants you to design Turing thingamajigs that do their dastardly calculations pretty quickly. Like just recently you were designing the Turing machine for Jet Blue's airplane and employee scheduling system, and you had a great design, but your boss complained that it worked fine for a few airplanes in the fleet, but as the number of airplanes and employees grew, your algorithm scaled fairly poorly. Okay, well, horribly poorly, like it would take the age of the universe to finish calculating for today's airline fleet. And so you got chewed out by your boss, who yelled the word "polynomial" at you so many times that you finally got the point that he wanted Turing machines that finished in a time polynomial in the number of airplanes and employees. What a strange concept to see those polynomials, which were your great friends when calculating trajectories, show up between swear words in your boss's family-unfriendly diatribe.
So now you spend hours in your office each day, writing these Turing tables, and watching your Turing machines calculate away. Ticking away the moments that make a Turing day, you fritter and waste the hours in an offloaded way. Mostly you spend your time with the machines which have a polynomial running time, the ones your boss really likes. I mean, you really don't want to tick him off and have him shout exponential swear words at you. Now for a given problem your boss gives you, you come up with a little Turing machine which says this problem has the answer "yes" or "no." Your boss is big on these types of answers; you're not sure quite why, but perhaps it has something to do with his childhood spent communicating in a secret binary code with the surrogate robotic sister his parents had designed for him after they learned they couldn't have any more children.
Even more years later
Well, the computer job has been real solid. You can retire now, since your Turing machines helped bring peace between Facebook and MySpace. But lately you find your mind wandering. You've been thinking about these nice polynomial running time Turing gadgets which you design. You run the program and it tells you "yes" or "no." Reminds you somehow, of what…. Of your hallucinations? Of course!
So you think a little bit more about this analogy. Before, you had a Hamiltonian and could verify yes or no whether it had a ground state which minimized each term individually. Now you have a Turing table which answers yes and no. Might there be a connection? Oh, this is going to hurt your head.
A few hours later
You've been daydreaming about these Turing gadgets all day now. Days spent lounging on the beach beside your mansion since, yeah, your computer job hit paydirt. You imagine the big long Turing tape with some initial scribbles on it representing the problem your boss gave you. Then you picture the Turing machine hacking away at this tape, creating, for each of its steps, a new long tape, perhaps with new symbols, but at one time step later. You imagine in your head the entire history of the Turing tape. You imagine also along this tape a history of the location of the Turing machine, along with a record of the internal state of the machine, on this spacetime diagram.
How might it be possible to connect this to your Hamiltonian hallucinations? Might it be possible to write down a Hamiltonian which has, or does not have, a ground state minimizing each term individually, with the answer tied to that spacetime movie script you were just imagining? First you imagine that the spacetime diagram is now a space-space diagram. Then you think about your Turing table. It is a specification of valid transitions your Turing machine can make. And if you ponder that spacetime diagram made up of Turing tape cells and Turing head locations and Turing internal states, you think: well, I could give configurations in my space-space diagram which correspond to entries in the table zero energy, and all other configurations not in the table (invalid evolutions) energy one. And I could do this in a totally local way on this space-space diagram, so that the Hamiltonian I would write down would be local and only act on a few subsystems. Further, I could set up the Hamiltonian at the beginning slice of time, which is now, say, the bottom slice of space in my space-space picture, such that the initial state of the Turing machine and Turing tape and Turing location has energy zero and other states have energy one. Since I know this initial state, I can write down that Hamiltonian locally. Finally, at some point we'll want an answer, so we can assign an energy penalty for being in that last terminating configuration and outputting "no", but if we are in that output configuration and outputting "yes" we assign no energy penalty. Easy enough.
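(Here is a sketch, again in Python with my own toy encoding rather than the textbook's exact gadgets, of that space-space construction: each cell of the diagram holds (symbol, internal state or None), and the key locality fact is that cell (t+1, j) is completely determined by the three cells (t, j-1), (t, j), (t, j+1). The energy terms below just penalize, one small window at a time, anything that isn't the right initial row, isn't a valid table transition, or ends up in the rejecting state.)

```python
BLANK = "_"

def next_cell(delta, halt_states, left, mid, right):
    """The value cell (t+1, j) must take, given the three cells above it:
    left = (t, j-1), mid = (t, j), right = (t, j+1).  A cell is (symbol, state),
    with state None when the head is elsewhere."""
    lsym, lst = left
    msym, mst = mid
    rsym, rst = right
    if mst is not None:                       # head is sitting on this cell
        if mst in halt_states:
            return (msym, mst)                # halted: the configuration freezes
        _, new_sym, _ = delta[(mst, msym)]
        return (new_sym, None)                # head writes here and moves off
    if lst is not None and lst not in halt_states:
        new_state, _, move = delta[(lst, lsym)]
        if move == "R":
            return (msym, new_state)          # head arrives from the left
    if rst is not None and rst not in halt_states:
        new_state, _, move = delta[(rst, rsym)]
        if move == "L":
            return (msym, new_state)          # head arrives from the right
    return (msym, None)                       # head nowhere nearby: cell is copied

def initial_row(input_symbols, W):
    tape = list(input_symbols) + [BLANK] * (W - len(input_symbols))
    return [(s, "q0" if j == 0 else None) for j, s in enumerate(tape)]

def build_terms(delta, halt_states, reject_state, input_symbols, T, W):
    """Local energy terms over a T-row, W-column space-space diagram.  Each term
    looks at a handful of cells and contributes 0 or 1; total energy 0 is
    achievable exactly when the run on this input does not end up rejecting."""
    pad = (BLANK, None)
    init = initial_row(input_symbols, W)
    terms = []
    for j in range(W):                        # (1) bottom row = initial configuration
        terms.append(lambda tab, j=j: 0 if tab[0][j] == init[j] else 1)
    for t in range(T - 1):                    # (2) each row follows from the one below
        for j in range(W):
            def window(tab, t=t, j=j):
                left = tab[t][j - 1] if j > 0 else pad
                right = tab[t][j + 1] if j < W - 1 else pad
                want = next_cell(delta, halt_states, left, tab[t][j], right)
                return 0 if tab[t + 1][j] == want else 1
            terms.append(window)
    for j in range(W):                        # (3) penalize a rejecting final answer
        terms.append(lambda tab, j=j: 1 if tab[T - 1][j][1] == reject_state else 0)
    return terms

def honest_tableau(delta, halt_states, input_symbols, T, W):
    """The actual run history: apply the local rule row by row."""
    pad = (BLANK, None)
    rows = [initial_row(input_symbols, W)]
    for _ in range(T - 1):
        prev = rows[-1]
        rows.append([next_cell(delta, halt_states,
                               prev[j - 1] if j > 0 else pad, prev[j],
                               prev[j + 1] if j < W - 1 else pad)
                     for j in range(W)])
    return rows

# The same toy machine as before: accept ("qA") iff the first symbol is "1".
delta = {("q0", "1"): ("qA", "1", "R"),
         ("q0", "0"): ("qR", "0", "R")}
halt, T, W = {"qA", "qR"}, 4, 4

for inp in ("10", "00"):
    terms = build_terms(delta, halt, "qR", inp, T, W)
    tab = honest_tableau(delta, halt, inp, T, W)
    print(inp, sum(term(tab) for term in terms))
# "10" -> 0  (accepting run: every local term sits at its minimum)
# "00" -> 1  (the run ends in "qR", so the output penalty kicks in)
```

Notice that writing down the terms never required running the machine; the honest_tableau demo is only there to check that the valid, accepting history really does hit energy zero while the rejecting one picks up a penalty.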
Bam. Cool! So it is possible to take your Turing machine which runs in polynomial time, along with its polynomial sized input, and convert it to a Hamiltonian which is local and acts on a few subsystems at a time. Then answering whether that Turing machine will answer yes or no is the same as solving that problem your hallucinations solved ages ago when you were poor and a physicist. Of course, answering that question without the magic of your powerful hallucinations doesn't seem possible, but still! I mean, all this work your boss has been having you do is related to a problem about the existence of a certain type of ground state of a Hamiltonian. Take that(!) computer science day job that has been so lucrative.
“Deus Ex Machina” years later, graveside
Well, it's been a good life. Why can you still hear me, aren't I dead? Yes, I'm dead, but this is a blog post, so why be constrained by reality? Cool, it now seems that the people gathered for my funeral are all giving speeches about me. Better listen in.
"He was an obstinate man. So obstinate I always thought that when Death came to claim him he would plug his ears so as to escape Death's entropic death chant. And probably stick his tongue out at Death at the same time. What killed him? We shall never truly know, I think, and the story is so strange. A few of us were sitting around talking about the golden days of classical computers and somehow the subject of complexity classes came around. Here was an icon of the MySpace/Facebook Middle Internet peace agreement, and he had never heard of the complexity class NP. So we patiently explained to him the class of problems which could be efficiently verified in polynomial time. And how all the problems in NP could be converted to a problem called satisfiability, having to do with whether a Boolean formula had an input which evaluated to one. And that this satisfiability problem itself was in NP. Everything in NP was effectively a SAT and every SAT was in NP. And he sat there tapping his head. Chanting 'SAT was my Hamiltonian problem.' And then he keeled over dead. So strange. Does anyone know what this Hamiltonian problem is? What Hamiltonian path was he speaking of? It must have been an interesting problem. Because he died with a smile on his face."
Inspired by the Optimizer’s ‘Physics for Doofuses’ whose opposite is clearly ‘Computer Science for Physicists’.

I Dream of Computation

I dream of computation.
Our generous universe comes equipped with the ability to compute. That it didn't have to be this way is, of course, a truism. An unspeakable truism – for that it is this way is what allows us to even ask the question of why it is this way. But down this rabbit hole I sometimes drop. And a haunted world I find. For what could be worse than a universe in which computation is not possible? What a nightmare world, one which allows for, at most, the basic order of the universe to transform impotently. No computers is an insignificant consequence. No life, now that is larger. Even grander, nothing digital nor coarsely analog that is not uniform and changeable in ways which aren't transparently tractable. So totally random or so ploddingly deterministic, with no blemishes. For me this world seems like an absolute hell. The perfect embodiment of the boring. A universe made up of a solved problem.
And yet, despite the depressing thought of realities without computation, even our universe is not so generous as to make computation the norm. Certainly, even if life is prevalent all throughout our galaxy, large chunks of our universe seem devoid of computing anything at all interesting. When I look up at the night sky and the light of photons traveling years in my reference frame strikes my eye, what useful computation has been performed? The universe as a computer is the other truism, this one equally dangerous. For computation is more than just dynamics. More than just that photon going its merry way across the universe, refracting, reflecting, dying as it passes beyond my eyelash. Computation, or at least the kind of computation worth caring about, is more than just trajectories through spacetime; it is instead a robust movement of something significant. Just because the universe is doing a calculation, if we cannot be privy to it, then it does not count as computation, at least for mere mortals like us. The lesson of chaos, the lesson of statistical physics, the lesson of fault-tolerant computation, is that that generous gift called robust computation is a rare one in our universe.
Which leads us to yet another alternative reality, pressing further along the ladder of possibilities. This universe is the one which is the opposite of a world in which computation is not possible. A universe in which all that exists is perfect robust computation. A world in which noise is not possible, everything is digital, and a robust computation. In this world that photon which travelled across the universe did perform a computation. And this computation will be robust to any inhomogeneity it has experienced in its life. Those reflections and refractions which could previously lead it to wobble, unstable, on the brink of a computation, have now all been eliminated, or at least immunized against. Indeed, in this overly computational universe, your own path in life would be so immunized against interaction with things outside of yourself that it is not clear that competing computational entities could even arise in this universe. Every competing entity, if it is to obey our law of perfect robustness, must be part of yourself or it will possibly violate your own robustness. Indeed, we may as well call this the universe of the Borg, that great monolithic enemy in Star Trek, but where, by definition and in contrast to Star Trek, everything has been assimilated.
Three universes: ours, a world where nothing computes, and a world where everything robustly computes. And what I dream about at night is, of course and in total abhorrence of the part of me which hates any-centric reasoning, how wondrous our Goldilocks of a universe seems to be. But what keeps waking me up from this wonderful dreaming, in this world where robust computation seems the true jewel, is whether this idea, this form of a computational universe, shows up in the heart of the theory of our universe or the theory of our computers.
Questions. What role does error correction play in the fundamental physics of our universe? What role does statistical physics play in our understanding of complexity classes? For both these questions we have peripheral answers; indeed, when I pose these questions in this forum, I am immediately given by commenters examples of both contributions. But these are, at best, small hints. Is there a deep and wondrous connection, one which in a hundred years we will find as obvious as the idea that mathematics can be used to describe how our universe operates? Of all the universes I imagine, when I try to turn the deity dial from a universe with no computation to one with perfect robust computation, do I find a continuum or islands of stability, with our universe being one of these islands? Is the universe the way it is because of the properties of how any universe could possibly support or not support computation?
Through nightmares of boredom and perfection fairytales, to perturbations around our own universe, I dream of computation. And wonder where this dream’s computation about computation will finally lead.

Robust Bacon

I’m currently visiting the Perimeter Institute for a workshop on fault-tolerant quantum computing. The workshop is great, but the coolest thing is that I get to wear a badge which calls me fault-tolerant:
ftbacon.jpg
Am I robust? I certainly don’t feel robust, indeed every day it seems entropy finds yet another way to sneak attack me 🙂

A Thesis Where?

A strange place to find a computer science dissertation: Walmart.
Update: Commenter Michael points to another interesting book for sale at Walmart. But who is this "Iaac" third author?

Computer Science. Is it Really About Computers?

From Bruce Sterling's Wired blog: Computer Science. Is it Really a Science, and What's It a Science About? with a link to The Great Principles of Computing. Debating whether computer science is a science is sure to elicit great gobs of impassioned points and counterpoints. This will be followed, of course, by ad hominem attacks relating to the fundamental status of a particular discipline. Finally the whole thing will terminate with everyone going home in a big tizzy.
Me, however, I’ve been spending too much time in computer science theory land so I’m not worried about whether computer science is a science. Me, I’m more worried about whether computer science is really about computers 🙂