A Serious Version

A resubmission, quant-ph/0507189. Much, much more serious:

Title: |0>|1>+|1>|0>
Authors: S.J. van Enk
Comments: A more serious version, almost 2.36 pages, but still an unnormalized title
Note: replaced with revised version Thu, 11 Aug 2005 17:17:52 GMT (6kb)

Quantum Algorithms People

I’m putting together a talk right now and I was trying to make a list of people who have worked in the past or are now working in the field of quantum algorithms. Below the fold is the list I have right now. If anyone in the know spots someone I’ve missed, please let me know (and apologies in advance to those I’ve left out!) This list is a modified version of a list of people stolen from Wim van Dam’s home of the homepages. (Thanks for doing the hard work Wim!)

Warning: Negative Information Ahead

The paper quant-ph/0505062 has been out for a while now, and I kept meaning to comment on it, but somehow never got around to it. Now it’s hit the big time. This month’s Nature has the article “Partial Quantum Information” by Michal Horodecki, Jonathan Oppenheim and Andreas Winter, along with a nice intro written by Patrick Hayden. Jonathan Oppenheim also has a really, really good popular explanation here. Normally I would write a little explaining this result, but Patrick’s intro and Jonathan Oppenheim’s web page are so good that, well, you should just get your info from the horse’s mouth (so to speak.) But I will tell a story related to this work.
When I was an undergraduate I spent the summer between my junior and senior years working for Nicolas Cerf and Chris Adami. Specifically, I was working on trying to find quantum algorithms to efficiently solve NP-complete problems. I call this my summer spent banging my head against the wall. Not a very effective summer, research-wise, but I learned a lot about quantum computing over that summer. Anyways, Cerf and Adami worked for Professor Steve Koonin (I think they were both postdocs, but I’m not sure if I remember this correctly.) When I was a freshman at Caltech, a few weeks after I showed up, I was wandering around campus and saw two guys having a really animated conversation outside on a bench. One of the guys was really, really tall, and the other guy was fairly short and looked EXACTLY like Rick Moranis. I mean exactly. This being Los Angeles, and me being a hick from the sticks, I was only a few feet away from asking the shorter guy for an autograph when I chickened out. Which is a good thing, because it turns out that this guy was none other than Steve Koonin! (Koonin, by the way, was an undergrad at Caltech. He went to MIT and got his Ph.D. in three years. Three years! Now that’s the power of a Caltech education 😉 ) Some of you will find it funny that the second guy on that bench, I later learned, was Jeff Kimble. Many of you will know Professor Kimble from his demonstration of a quantum logic gate in a cavity QED system in 1995 (as well as a plethora of other cool cavity QED results.)
Back to the story at hand! So I worked for Cerf and Adami. At the time, they had this really strange paper (see here and here) where they talked about negative quantum information. In particular, they noticed that if you took the conditional Shannon entropy and converted it over to the quantum mechanical conditional von Neumann entropy, then while the classical conditional entropy is always nonnegative, the quantum mechanical conditional entropy can be negative. For example, a Bell pair has a conditional von Neumann entropy of negative one (bit). This was strange, and one question their work raised is what exactly this negative number might mean. Another question along these lines arose a little later when the conditional entropy showed up in the quantum channel capacity. Here, however, when this quantity came out negative, the capacity was simply set to zero (creating the so-called coherent information.) Having a negative capacity for a noisy quantum channel didn’t seem like a reasonable thing! What the work described above does is give an operational meaning to this mysterious quantity, the conditional quantum entropy. By operational, I mean that those negative numbers now have an interpretation in terms of a protocol which operates at a rate related to those negative numbers. Read the article to see how this all works out. It is very nice. The authors promise a more detailed paper (with a date stamp of 2005, so soon) which I’m looking forward to.
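To make that sign concrete, here is a minimal numpy sketch (my own illustration, not from any of the papers above) computing the conditional von Neumann entropy S(A|B) = S(AB) - S(B) for a Bell pair:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits: S(rho) = -Tr[rho log2 rho]."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop (numerically) zero eigenvalues
    return -np.sum(evals * np.log2(evals))

# Bell pair (|00> + |11>)/sqrt(2), basis ordered |00>, |01>, |10>, |11>
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_ab = np.outer(phi, phi.conj())

# Partial trace over system A leaves the marginal state of B
rho_b = rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

S_ab = von_neumann_entropy(rho_ab)   # 0: the joint state is pure
S_b = von_neumann_entropy(rho_b)     # 1: the marginal is maximally mixed
print(S_ab - S_b)                    # S(A|B) = -1, impossible classically
```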
(Thanks to Saheli and Grant for pointing me to Jonathan’s web page. Grant, who is a student in the class I am teaching, emailed the class mailing list with links to the negative quantum information web pages, and the comment “It’s possible we may leave this class knowing less than when we started!!!” Hopefully, the same effect hasn’t happened to you after reading this post 😉 )

Foundations of Quantum Theory, Quantum Gravity, and Quantum Computing

On a couple of blogs (Not Even Wrong and Luboš Motl’s reference frame) a question has crept up about what role studies of the foundations of quantum theory have to play in a future theory of quantum gravity. At the Strings2005 conference in Toronto recently, this question was raised during a panel discussion. When someone claimed that foundations hadn’t contributed anything to physics, Lee Smolin apparently said something to the effect that the study of the foundations of quantum theory has given us quantum computing.
It is true that the original thinkers about quantum computers, and in particular I’m thinking about David Deutsch, were inspired by interpretational issues in quantum theory. But I think the relationship between foundational studies of quantum theory and what is now quantum information science is a bit different. I think foundational studies have played the role of a “clarifier” in quantum information science. Instead of many results being motivated by foundational issues directly, studies in foundations have led those in quantum computing to a better understanding of what quantum theory is and what it is not. I certainly think that banging my head up against the different interpretations of quantum theory has given me a very good grasp of the limits of what quantum theory can say and what it cannot say. Thus I think quantum information science has benefited immensely by listening to the foundations crowd and learning just exactly what is so different about quantum theory. So, while the interpretations themselves don’t have any gigantic successes (one could argue for smaller ones!), I think they are an essential base which betters the field of quantum computing.
Now back to quantum gravity. Whether or not the foundations of quantum theory has anything to say about quantum gravity is, I think, a question we can debate until the cows come home. There are certainly very good philosophical arguments that the strain between gravity and quantum theory must be broken in some nontrivial manner, and whether we break quantum theory or gravity is, I think, an intriguing question. But if we take quantum computing as an example, the lesson may be that by careful study of quantum theory you can gain an appreciation for it which might allow you to make progress in forming a quantum theory of gravity. I must say that I am often shocked by the lack of understanding of the details of quantum theory among high energy physicists. Sure, they use it every day. But do they really deeply get the theory and what it can and cannot do? Of course this is a very prejudiced statement coming from a very prejudiced quantum computing dude. We hold quantum theory fairly sacred and hate to see it abused. I’m also sure that high energy physicists are greatly pained by quantum information scientists’ lack of understanding of “real” physics!
My personal views on the relationship between foundations and quantum gravity are about as close as I get to pseudoscientific gobbledygook. I gave a talk on those views once. It was supposed to be one hour and it ran to two. Sometimes I contemplate writing a book about these views… Penrose did it, so why can’t I? 😉

Quantum Computing For Dollars

One of the students in the class I am teaching this summer, after we had covered Simon’s quantum algorithm, had this interesting encounter:

I met Dan Simon at Pro Sports Club in Bellevue yesterday and here is the short dialogue we had:
Student – “Your algorithm is giving me so much of a headache this weekend!”
Dan Simon – “You know, I really apologize: I was young, needed the money, I just had to do that…”

Nice.

A Mere Five Orders of Magnitude

After the cool PRL describing high visibility for a superconducting qubit, today I find in Physical Review Letters the article “Long-Lived Qubit Memory Using Atomic Ions” by Langer et al. (volume 95, page 060502). This paper describes work at NIST (Boulder) with trapped beryllium ions. It’s a cool paper in which they achieve coherence lifetimes for their qubit in excess of 10 seconds using two techniques: one involves tuning an external magnetic field to a sweet spot in the energy levels of their qubit, and the other uses a decoherence-free subspace encoding of the quantum information (the latter actually gives slightly less than 10 seconds of coherence).
One of the main sources of decoherence for ionic qubits is ambient fluctuating magnetic fields. In many implementations of qubits in ions, the energy levels used for the qubit are sensitive to magnetic fields. Stray magnetic fields cause these energy levels to fluctuate, and this causes phase decoherence of the qubit. The trick this paper reports is to tune an external magnetic field to a precise point (0.01194 tesla; for comparison, the Earth’s magnetic field at its surface is around 50 microtesla) where the energy difference between the two levels used for the qubit has no first-order dependence on the magnetic field (though it does retain a second-order dependence). Similar tricks have been used in neutral atom systems, in particular in rubidium. But there these manipulations were done with microwaves and with large numbers of atoms. Further, there are other problems (perhaps not killers, but they are there) with using those qubits in quantum computers. One problem is that microwaves may eventually not be a practical way to build a quantum computer because they are hard to focus, and other suggested techniques, like applying a magnetic field gradient to distinguish between qubits, may have other destructive overheads. For these reasons, this technique with a beryllium ion qubit is very cool. What is really nice is that the authors obtain an increase of five orders of magnitude in the lifetime of this qubit over their previous experiments. Nothing like a good five orders of magnitude to make my day. (Oh, and for those of you who care about these things, they quote a memory error rate per detection of around 10^(-5), below some of the good old fault-tolerant thresholds for quantum computation.)
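To see what a sweet spot means, here is a toy numpy sketch (my own illustration; the coupling, slope, and level structure are made up, not the actual Be+ hyperfine levels): two levels that anticross give a splitting whose first derivative with respect to B vanishes at the minimum, leaving only second-order sensitivity to field noise:

```python
import numpy as np

g = 1.0          # coupling between the two levels (arbitrary units)
mu = 1.0         # linear Zeeman slope (arbitrary units)
B0 = 0.01194     # place the toy sweet spot at the field quoted in the paper

def splitting(B):
    d = mu * (B - B0)
    return 2.0 * np.sqrt(d**2 + g**2)   # eigenvalue gap of [[d, g], [g, -d]]

B = np.linspace(B0 - 0.005, B0 + 0.005, 10001)
f = splitting(B)
dfdB = np.gradient(f, B)

i = np.argmin(np.abs(dfdB))
print("sweet spot near B =", B[i])                 # ~0.01194
print("df/dB there      =", dfdB[i])               # ~0: no first-order shift
print("second-order term =", np.gradient(dfdB, B)[i])  # nonzero curvature
```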
The other technique the authors use to obtain long lifetimes is to encode their qubit into a decoherence-free subspace (DFS). Here the idea is to encode one qubit into two qubits such that the physical qubits are affected equally by magnetic fields which are uniform across them, leaving the encoded qubit untouched. Using DFSs to protect quantum information in ion traps had previously been reported. In fact, it helped me get my Ph.D. When I was giving my qualifying exam and explaining what a DFS was, one unnamed experimentalist asked (roughly) “This is all good and fine, in theory, but does it correspond to the real world?” Luckily my next slide was on the DFS demonstrated in ion traps by Kielpinski et al. Booyah! In this paper the authors encode information into the DFS and then cause oscillations between the two levels of the qubit by applying a magnetic field gradient. Since the DFS states are |01> and |10>, this basically means that the system is almost always in an entangled state. The lifetime for this entangled state oscillation is measured to be around seven seconds!
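Here is a quick numpy sketch (my own illustration) of why the encoding works: the generator of collective dephasing, Z⊗I + I⊗Z, annihilates |01> and |10>, so uniform phase noise leaves any state in their span exactly unchanged, while scrambling superpositions outside the subspace:

```python
import numpy as np

# Z(x)I + I(x)Z is diagonal with eigenvalues (2, 0, 0, -2) on
# |00>, |01>, |10>, |11>, so a uniform phase kick acts as
# U(theta) = diag(exp(-i theta), 1, 1, exp(+i theta)).
def collective_dephasing(theta):
    return np.diag([np.exp(-1j * theta), 1, 1, np.exp(1j * theta)])

rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a)**2 + abs(b)**2)

dfs_state = np.array([0, a, b, 0]) / norm    # a|01> + b|10>: inside the DFS
bare_state = np.array([a, 0, 0, b]) / norm   # a|00> + b|11>: outside the DFS

theta = 1.234
U = collective_dephasing(theta)
print(np.allclose(U @ dfs_state, dfs_state))    # True: DFS state untouched
print(np.allclose(U @ bare_state, bare_state))  # False: relative phase acquired
```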
Update: Right after I posted this, I read quant-ph and found quant-ph/0508021, “Robust Entanglement” by H. Haeffner et al., which reports on ion trap experiments in Innsbruck. There they demonstrate lifetimes of twenty seconds for entangled states of their calcium ions. How cool is that!
Update update: Some press here

High Visibility In Superconducting Qubits

One of the more attractive approaches to building a quantum computer is the various proposals which utilize superconducting electrical circuits. One of the benefits of using superconductors to build a quantum computer is that scaling up from a few to many qubits might be easier because of the advanced state of fabrication for superconducting circuits. (Although it must be said that “fitting everything together” will certainly still be a challenge. But it does seem like a slightly easier challenge than, say, loading 1000 ions into separate ion traps.)
One of the interesting problems with superconducting qubits has been that no one has been able to demonstrate high visibility of the qubits. One question which is particularly acute for solid state qubits is whether they can be sufficiently isolated from their environment to act as truly two-level systems. Interestingly, some of the early experiments with superconducting circuits demonstrated Rabi flopping of a single qubit but weren’t able to achieve high visibility of this flopping. What this means is that instead of observing the qubit flopping back and forth between 100% population in one of the qubit states and 100% population in the other, the experiments observed, say, 100% in one state and then, say, 30% in the other. And while the flopping was not between 100% |0> and 100% |1>, after this initial reduction to, in our example, 30%, the qubits then Rabi flopped with a pretty slow decay. Thus it seemed that there was a “visibility” problem: the qubits were probably oscillating properly, but something in the scheme was causing the measurements to not see the full oscillation.
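For a concrete picture of what reduced visibility looks like, here is a simplified, symmetric toy model of my own (not the actual measurement physics of these experiments): take oscillations of the form P(t) = 1/2 + (V/2)cos(Ωt) and read the visibility off the maximum and minimum of the measured signal:

```python
import numpy as np

# Toy Rabi signal with contrast V: full visibility (V = 1) swings the
# measured population between 0 and 1; V = 0.3 swings it only between
# 0.35 and 0.65 even if the underlying qubit flops perfectly.
def population(t, omega=2 * np.pi, visibility=1.0):
    return 0.5 + 0.5 * visibility * np.cos(omega * t)

t = np.linspace(0, 3, 1000)
for V in (1.0, 0.3, 0.95):
    p = population(t, visibility=V)
    # standard contrast definition: V = (max - min) / (max + min)
    print(V, (p.max() - p.min()) / (p.max() + p.min()))
```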
Now in today’s Physical Review Letters, A. Wallraff et al. from Yale report on superconducting experiments in which they achieve nearly 95% visibility in Rabi flop measurements of superconducting qubits! The trick these authors use is to couple their (charge) superconducting circuit to, basically, an electromagnetic cavity. Thus what these experimentalists achieve is a superconducting qubit which can be strongly coupled to a quantum electrodynamic cavity mode. They then use this nice pure coupling to perform a measurement on the superconducting qubit, with the beautiful result that they see really high visibilities.
This is very exciting news. Ion trap proposals for quantum computers can still obtain higher visibility in their experiments, but this movement from visibilities less than 50 percent to 95 percent is an awesome jump. I can’t wait until they get this up to 99 percent and then to 99.9 percent. Then we will be rocking!

Quantum Computer Dollars

How much would you pay for a quantum computer?
Of course, it depends on exactly what this quantum computer can do, doesn’t it! If I give you a two qubit quantum computer, you may not want to pay me more than two bits (25 cents, people, not two binary digits.) Which is not to belittle the experiments that have been done to date with few-qubit quantum computers…these are among the most impressive works of experimental physics/engineering around. But I certainly wouldn’t pay much for the computational power these experiments demonstrate.
There are roughly two regimes where I think someone might actually want to buy a quantum computer. The first is a quantum computer with around 100 qubits or so which can perform some thousands of operations before the computer decoheres/errors. Why would I be interested in such a machine? Well, because I have no idea how to efficiently simulate some quantum systems of this size. Why do I go up to 100 qubits and not some smaller number like 20 or 30? Certainly simulating quantum systems of those sizes is difficult. However, the systems we would really like a quantum computer to simulate, those with a large amount of entanglement, are probably two (or higher) dimensional systems, and a two dimensional system of ten by ten seems like a regime where I can at least begin to rid myself of some small finite-size effects.
The next step, of course, is a full-scale quantum computer, one which is operating below the threshold for fault-tolerant quantum computation. What price should we assign such a device? Again it depends on the exact specs. But let’s just assume this quantum computer has a few kilobytes of quantum memory. What will the clock speed of our quantum computer be? Well, it will certainly depend on the physical implementation. And there is the overhead of quantum error correction. So the clock speed may range anywhere from MHz to even PHz. How much would you pay for such a quantum computer?
For comparison, IBM’s Blue Gene, the world’s fastest supercomputer (that we know about) today, cost around one hundred million dollars.
Let the bidding begin!
The qBabbage: 100 qubit quantum computer, with the ability to perform, say, 1000 operations before decoherence/noise ruins a quantum simulation. Start bids at 10 thousand dollars.
The qMark I: A fault-tolerant quantum computer with 2 kilobytes of quantum memory and a clock speed of MHz. Start bids at half a million dollars.
The qWhirlwind: A fault-tolerant quantum computer with 2 kilobytes of quantum memory and a clock speed of THz. Start bids at one million dollars.

OMG My Classical Probability Distribution Collapsed!

Scott Aaronson has a nice diatribe “Are Quantum States Exponentially Long Vectors?” which he’s posted on the arXiv as quant-ph/0507242. In this note he discusses his own personal counter to certain objections to quantum computation. It’s a very nice read.
My favorite part of the article is where Scott comes out as a full-on epistemologist:

To describe a state of n particles, we need to write down an exponentially long vector of exponentially small numbers, which themselves vary continuously. Moreover, the instant we measure a particle, we “collapse” the vector that describes its state—and not only that, but possibly the state of another particle on the opposite side of the universe. Quick, what theory have I just described?
The answer is classical probability theory. The moral is that, before we throw up our hands over the “extravagance” of the quantum worldview, we ought to ask: is it so much more extravagant than the classical probabilistic worldview?

To which I can only say “Amen, brother!” I think physicists, in particular, are VERY bad at understanding this argument.
Suppose we want to write down a theory which describes the state of classical bits. One can certainly pretend that the classical bits are always in some definite state, but now ask: how do we describe the state of our classical bits when we carry out an operation like flipping a fair coin and, conditional on the outcome, setting a bit to zero or one? We then need probabilities to describe our classical set of bits. If we have n classical bits, then the probability vector describing such a classical system is made up of two to the power n numbers (the probabilities.) The number of numbers needed to describe a classical n bit system is exponential in the number of bits! So should we be surprised that quantum computing requires states described by an exponential number of complex amplitudes? Doesn’t seem as surprising now, does it?
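Here is a toy sketch (my own illustration) of exactly this bookkeeping: three classical bits need a vector of 2^3 = 8 probabilities, and the coin-flip operation is a stochastic update on that whole vector, just as a gate is a unitary update on a vector of amplitudes:

```python
import numpy as np

n = 3
p = np.zeros(2**n)
p[0] = 1.0                                   # start in the definite string 000

# Flip a fair coin and overwrite the first (leftmost) bit with the outcome:
# a stochastic map acting on the entire 2**n-entry probability vector.
q = np.zeros_like(p)
for s in range(2**n):
    for coin in (0, 1):                      # fair coin outcome
        t = (s & ~(1 << (n - 1))) | (coin << (n - 1))
        q[t] += 0.5 * p[s]
p = q

print(p)   # 0.5 on '000' and on '100': 2**3 = 8 numbers describe 3 bits
```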
And there are a bunch of other similarities between probabilistic computation and quantum computation. If we measure such a classical system, we certainly get one of the bit strings, and our description immediately changes to a probability distribution with only one nonzero entry: the probability distribution collapses. Similarly, from a single measurement we don’t learn the probabilities themselves, i.e. we don’t learn these (real) numbers describing the classical state.
Another interesting analogy (which can only be pushed so far…and this is the really interesting part!) is with correlated bits. Suppose I flip a fair coin and if the outcome is heads I put two bits which are both zero into two boxes. If the outcome is tails, I put two bits which are both one into two boxes. What is our description of the classical probabilistic state of these two boxes? We say 50% 00 and 50% 11. Now carry these boxes to the far ends of the universe. Open one of the boxes. Opening this box, I immediately know that the other bit, on the other side of the universe, must have the same value as my bit. Communication faster than light? No! Correlated bits? Yes! As a global observer, we can update our description of the system after a measurement by appropriately collapsing the probability distribution. Notice that until information about the measurement is communicated from one party to the other, the left-out party can’t change his/her description of their system (or of the global system). Quantum entanglement is a “bit” like this…but the surprising thing is that it turns out to be different! How different? Well, that is the subject of Bell’s theorem, and, needless to say, the end result is one of the essential differences between classical probabilistic computation and quantum computation. But the fact that quantum theory is a consistent way to describe probability amplitudes is directly analogous to the manner in which classical probabilistic descriptions work!
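For the curious, here is a short numpy sketch (my own illustration) of that difference: the CHSH combination of correlations is bounded by 2 for any local hidden variable model (like the boxes above), but measuring a Bell pair at the standard optimal angles yields 2√2:

```python
import numpy as np

def obs(theta):
    """Spin measurement at angle theta in the X-Z plane."""
    Z = np.array([[1, 0], [0, -1]])
    X = np.array([[0, 1], [1, 0]])
    return np.cos(theta) * Z + np.sin(theta) * X

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)    # Bell state (|00>+|11>)/sqrt(2)

def E(a, b):
    """Correlation <A(a) (x) B(b)> on the Bell state."""
    return np.real(phi.conj() @ np.kron(obs(a), obs(b)) @ phi)

a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
chsh = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(chsh)   # ~2.828 = 2*sqrt(2), beating the classical bound of 2
```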
There are even more similarities between quantum computation and probabilistic classical computation. For example, there is a classical analogue of teleportation. It works out to be the one-time pad!
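In case that sounds cryptic, a tiny sketch (my own illustration) of the analogy: the pre-shared random bit plays the role of the entangled pair, and the bit sent over the channel looks uniformly random on its own, just as the classical bits sent in teleportation reveal nothing about the quantum state:

```python
import secrets

r = secrets.randbits(1)    # shared randomness, distributed in advance
m = 1                      # the message bit to "teleport"

ciphertext = m ^ r         # what travels over the public channel
recovered = ciphertext ^ r # receiver undoes the pad with the shared bit
assert recovered == m
print(ciphertext, recovered)
```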
Notice that to get these interpretations of the similarities between classical probabilistic computation and quantum computation, we need to adopt a particular stance towards quantum theory. This is the epistemological view of quantum theory. In this view, roughly, the wave function of a quantum system is merely a description of the system. It is not, say, like the classical position of a particle, which is a real number we can really assign as a property of that classical system. I must say that I find myself very much in tune with this view of quantum theory. This does not mean, however, that this point of view totally solves all the problems people have with quantum theory. In particular, the problems of contextuality and the nonexistence of local hidden variable theories remain “troubling,” and the question of “a description of WHAT?” is roughly the measurement problem. I certainly think that among quantum computing theorists, roughly this point of view is gaining more and more adherents. Which is good, because any mention of many worlds is destined to make me go crazy!
As a side note, when I started teaching the quantum computing course this summer, I attempted to teach quantum theory from the epistemological point of view. Unfortunately, the pace I set was too fast, and so I had to change tactics. But it certainly would be interesting to try to teach quantum theory from this perspective.
A final quote from Scott:

For almost a century, quantum mechanics was like a Kabbalistic secret that God revealed to Bohr, Bohr revealed to the physicists, and the physicists revealed (clearly) to no one. So long as the lasers and transistors worked, the rest of us shrugged at all the talk of complementarity and wave-particle duality, taking for granted that we’d never understand, or need to understand, what such things actually meant. But today—largely because of quantum computing—the Schrödinger’s cat is out of the bag, and all of us are being forced to confront the exponential Beast that lurks inside our current picture of the world. And as you’d expect, not everyone is happy about that, just as the physicists themselves weren’t all happy when they first had to confront it in the 1920’s.

Which I really like, but I must take issue with. It’s all the physicists’ fault for not clearly communicating?! I don’t think so. I think computer scientists were too busy with other important things, like, say, inventing the modern computer and building modern complexity theory, to even bother coming over and talking with us physicists about quantum theory. Just because you weren’t paying attention doesn’t mean you get to say that physicists weren’t communicating clearly! Notice that it was basically three physicists, Benioff, Feynman, and Deutsch, who first really raised the question of what exactly a quantum computer would be. Of course, it took computer scientists, like Bernstein, Vazirani, Simon, and Shor, to actually show us the path forward! But I think someone just as easily could have thought up quantum computation in 1950 as in 1980. The reason it took so long to dream up quantum computers probably has more to do with the fact that no one, physicist or computer scientist, could really imagine doing the kinds of experiments quantum computers represent. Of course, none of this really matters, but it’s fun to yell and scream about it and pretend that it makes some sort of difference, when really it’s just fun and funny.

Make It Planar

Steve Flammia points me to this cool game. Well at least it’s cool if you are the computer science type.