The Second Attitude

“If it’s not on the web, then it does not exist!”
Yesterday I went to the library for the first time in a long time. I had forgotten how interesting it can be to browse the shelves. I picked up a copy of Roger Penrose’s thesis “An Analysis of the Structure of Space-Time” (1969?) which has, so far, been a totally fascinating read. I have vague recollections of the importance of spinors in general relativity from the class I took from Kip Thorne, but at the time it hadn’t really occurred to me that this could be more than a nice mathematical trick. Penrose really drives home how the employment of spinors, rather than tensors, for describing general relativity might be a more appropriate representation of space-time.
Also, in his introduction Penrose describes what is my favorite path towards reconciling quantum theory and general relativity:

The second attitude would be that quantum mechanics and general relativity cannot, or at least should not, be forced together into each other’s framework…that what is required is something more in the line of a “grand synthesis,” i.e. a new theory in which general relativity and quantum theory would emerge as different limiting cases, each applicable to its appropriate domain of phenomena, and in which, hopefully, such semi-philosophical quantum mechanical questions as the meaning of an “observation” might be resolved. In fact, this…point of view is the one to which I would, myself, more readily incline. But it is, for the present, possibly something of the lazy man’s way out, since it provides the relativist with an excuse for not tackling directly the substantial problems of quantization!

In physics, history has shown us many examples of theories whose validity in certain regimes breaks down when the theory is moved into a new regime. Sometimes the answer to resolving this is revolutionary (Why doesn’t an electron in orbit around an atom radiate away all its energy? The Bohr atom and then quantum theory!) and sometimes it is not as revolutionary (How do we explain the weak force? Fermi’s theory seems fairly good, but it is not renormalizable. Do we need to talk about nonrenormalizable theories? No! The Glashow-Weinberg-Salam theory is renormalizable: we just had the wrong theory!). What astonishes me about the theoretical physics community is just how much is invested in the nonrevolutionary point of view: that it should be possible to “quantize gravity” (via either string theory or loop quantum gravity). There are only a few crazies (’t Hooft and Penrose, for example) who seem to be pursuing Penrose’s “second attitude.” Part of the reason for this is dictated by the success of the traditional program: we’ve bagged electrodynamics, the weak force, and the strong force. Since in all of these cases we successfully quantized a classical theory, it seems reasonable to suggest that the “final” classical theory, gravity, should also fall to the quantization gods. But historical success does not the future guarantee! And so I will joyously spend too much of my time dreaming up ways to derive quantum theory and general relativity in their respective domains!

What We Are

We are the hollow men
We are the stuffed men
Leaning together
Headpiece filled with straw. Alas!

from T.S. Eliot’s The Hollow Men

What is a Qubit?

A question which I spend way too much time thinking about is: what, exactly, is a qubit? Sounds kind of silly, doesn’t it? So let me explain.
A qubit, of course, is the most basic unit of quantum information. It is a two dimensional quantum system. Pure states for a qubit are superpositions of |0> and |1>: a|0>+b|1>. Mixed states are two dimensional positive Hermitian matrices with unit trace. In older times a qubit was known as a two level system or a pseudo spin one-half system. Now we just say qubit. It’s shorter and cleaner and reminds us of the Bible.
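(Just to fix notation, here is a minimal numpy sketch of these definitions; nothing deep, but it makes the objects concrete.)

```python
import numpy as np

# A pure qubit state a|0> + b|1>, normalized so |a|^2 + |b|^2 = 1.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([a, b])

# Its density matrix |psi><psi|: a 2x2 positive Hermitian matrix
# with unit trace.
rho = np.outer(psi, psi.conj())
assert np.isclose(np.trace(rho), 1.0)              # unit trace
assert np.allclose(rho, rho.conj().T)              # Hermitian
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)   # positive

# A state which is mixed, not pure: the maximally mixed state,
# an equal classical mixture of |0> and |1>.
rho_mixed = 0.5 * np.eye(2)
```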
The first thing we learn about qubits is that a qubit is different from a bit. A bit is either 0 or 1. Sometimes we can prepare a qubit such that we always get a 0 or always get a 1 when we perform a two outcome measurement on the qubit, but not always: generically, when we repeatedly prepare the same state there are probabilities of getting either 0 or 1. So then we think, well, maybe a qubit is just a probabilistic bit, like something you see when studying information theory?
But this starts to fall apart pretty quickly. Why? Suppose you start a quantum system in the |0> state. Now if you apply the Hadamard operation to this quantum state, and measure the system in the |0>, |1> basis, you get a 50% chance of outcome 0 and a 50% chance of outcome 1. Apply the Hadamard twice before measuring, however, and you always get 0. The first strange thing about this is that you have done the same thing (the Hadamard) to the state and gone from 100% 0, to 50% 0 and 50% 1, to 100% 0. If you think about the qubit as a probabilistic bit, then you would like to map the Hadamard operations to the same Markov process on the probabilistic bit. Clearly this can’t be done. OK, so that’s a bit strange: it’s not just a probabilistic bit, if the Hadamards we apply are indeed the same Markov operation on the probabilistic bit. Now if you remember something about quantum theory, you remember that a measurement disturbs the state. So when we insert a measurement between the two Hadamards in the above experiment, we get something different: 100% 0, to 50% 0 and 50% 1, to 50% 0 and 50% 1. And this indeed can be achieved by a Markov chain. So if we accept that what happens between measurements is all we care about, then Hadamard is different from Hadamard Hadamard, and we are doing fine. But this is really strange. The system can’t always be a probabilistic bit during the entire course of the process. How do we know whether a future Hadamard is going to be performed or a measurement is going to be performed? Perhaps there is a cosmic conspiracy such that we can’t perform such an operation, but I hate to put God in the initial conditions.
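(If you want to watch the argument fail in the arithmetic, here is a little numpy sketch; the stochastic matrix bookkeeping in the comments is exactly the calculation described above.)

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the Hadamard
zero = np.array([1.0, 0.0])                    # the state |0>

def probs(psi):
    """Born rule probabilities in the |0>, |1> basis."""
    return np.abs(psi) ** 2

print(probs(H @ zero))      # [0.5 0.5] -- one Hadamard
print(probs(H @ H @ zero))  # [1.  0. ] -- two Hadamards: certainty again

# Now try to model the Hadamard as a single stochastic (Markov) matrix M
# acting on the probability vector (1, 0):
#   one application must give (0.5, 0.5), so the first column of M
#   is (0.5, 0.5); two applications must give (1, 0), i.e.
#   M @ (0.5, 0.5) = 0.5 * (col1 + col2) = (1, 0),
# which forces col2 = (1.5, -0.5) -- a negative "probability".
# No stochastic matrix can do the job.
```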
Well, so far we know that a qubit is not a bit (though it can act like one sometimes), nor is it a probabilistic bit (though it can act like one sometimes), so what is a qubit?
At this point it is nice to mention the usual way people get around the whole question of what a qubit is. They say loudly, “shut up and calculate.” In this view, it doesn’t matter that the system is not a probabilistic bit for the entire evolution: we just use quantum theory to take the whole process and calculate the probabilities of the outcomes. A spin-off of this view is that quantum states are epistemic: they represent our knowledge about the system. A qubit a|0>+b|1> is thus a way of representing our knowledge. This clearly has a large component of truth in it, and indeed it should be seriously considered that this is a totally consistent position. But it makes me nervous.
Indeed this view is really like a third view of a qubit: that it is a Bloch vector. We can take a pure state a|0>+b|1> and say that the qubit is the two complex numbers a and b. This view says that a qubit is the quantum state. But now there is something strange about the emergence of probabilities. We say a qubit is a and b, operations manipulate a and b, but when we make a measurement we never get a or b but only 0 or 1 (projective measurements here, silly objectionist), and with probabilities which depend on a and b in a particular way. So a qubit, which was just two determined numbers, gives rise to probabilities. Why the probabilities? Of course, you can always respond “that’s just the way it is!” and this is fine, but it feels a bit empty.
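(To make the tension concrete, here is a small sketch of the Bloch vector view: two perfectly determined numbers a and b, and the probabilities they somehow give rise to.)

```python
import numpy as np

def bloch_vector(a, b):
    """Bloch vector (n_x, n_y, n_z) of the pure state a|0> + b|1>."""
    return np.array([2 * (np.conj(a) * b).real,
                     2 * (np.conj(a) * b).imag,
                     abs(a) ** 2 - abs(b) ** 2])

a, b = np.sqrt(0.25), np.sqrt(0.75)   # two perfectly determined numbers...
n = bloch_vector(a, b)

# ...yet a measurement never returns a or b, only 0 or 1, with
# probabilities fixed by a and b (equivalently by the Bloch vector):
p0 = (1 + n[2]) / 2   # = |a|^2 = 0.25
p1 = (1 - n[2]) / 2   # = |b|^2 = 0.75
```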
To understand why I consider this solution a bit empty, I will have to confess something shameful. The question I’m asking, “what is a qubit?”, is really motivated by the fact that I am a devoted realist. To be a realist as a physicist is a bit like saying you’re skeptical of Paul’s teachings in the Catholic church. Quantum theory may be a theory of knowledge about the system, but the question I want to understand is: knowledge about what? So when I think about a qubit as the numbers a and b, I get disturbed: if these are the real properties, why do we get out the probabilities? Now what bugs me here is not the nondeterministic nature of quantum theory; I don’t give a rat’s ass about the laws being probabilistic. But what does bother me is that if we say a qubit is these deterministic numbers a and b, we haven’t explained why we get probabilities, and so we haven’t really answered the question of what a qubit is. Just in passing, to bolster my ego and raise my crackpot level, I will note that Einstein had very similar views. Despite the God and the dice comment, it seems clear historically that the nondeterminism of quantum theory didn’t bother Einstein (note that I am avoiding talking about what did bother him!)
So what is a qubit? I think the answer to this should have a few properties. What I’m looking for is a way to describe a qubit such that the probabilities of measurement outcomes arise from our ignorance about the realistic description when we measure the qubit. I want a physical process or a computation or a realistic description such that, when we describe measurement on this system, we intrinsically cannot gain access to all of the information in the description, and this lack of knowledge gives rise to exactly the quantum probabilities. Fundamentally, I would like a combinatorial understanding of where the probabilities in quantum theory come from.
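(Here is the flavor of thing I mean, as a toy sketch. For a single qubit such ignorance models do exist; the one below is in the spirit of Bell’s old single-spin hidden-variable construction, though the particular outcome rule is just one simple choice of mine, not anyone’s canonical model. The outcome is completely determined by a hidden unit vector lam; the Born probabilities emerge purely from our ignorance of lam.)

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hidden(size):
    """The hidden variables: unit vectors uniform on the sphere."""
    v = rng.normal(size=(size, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def outcome(lam, m, n):
    """Deterministic rule: measuring spin along unit vector n on the
    state with Bloch vector m gives +1 iff (lam + m) . n > 0."""
    return np.sign((lam + m) @ n)

m = np.array([0.0, 0.0, 1.0])                   # the state |0>
n = np.array([np.sin(1.0), 0.0, np.cos(1.0)])   # axis 1 radian from z

lam = sample_hidden(1_000_000)
p_plus = np.mean(outcome(lam, m, n) > 0)
print(p_plus)   # ~ (1 + m.n)/2 = cos^2(1/2) ~ 0.77, the Born rule
```

(The trick is Archimedes’ hat-box theorem: for lam uniform on the sphere, lam . n is uniform on [-1, 1], so averaging the deterministic rule over our ignorance of lam reproduces exactly the quantum probabilities.)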
Now, of course, you might say this is all really, really silly, because you know all about Bell’s theorem. Bell’s theorem states that there is no local realistic description of quantum theory. What this means is that if I find a realistic description of a qubit, then I will have to make a nonlocal theory to explain two qubits. Nonlocality is heresy in Church Physics, so we are supposed to accept that thinking about realistic descriptions is wrong. But I don’t give a rat’s ass about nonlocality. If you don’t signal, what’s the big deal? There are no inconsistencies, no time travel paradoxes, etc. Quantum theory is just one such nonlocal no-signaling theory: surely there are others!
But now the strange thing happens. If I really do find a realistic description of a qubit and understand how the ignorance must give rise to probabilities, what happens when I consider two qubits? The realistic description must be nonlocal: it should consist of entities which are not at point A or at point B but instead are, in some sense, at both! And now we have another mystery: if the realistic entities are at both points, why can’t we signal between these points? So a realistic description must show why the measurements, with their ignorance and probabilities, performed at the different points cannot manipulate the realistic description to send signals.
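(Quantum theory itself, of course, passes this test. Here is a small numpy sketch, for the singlet state of two qubits, showing that Alice’s outcome statistics are completely indifferent to which axis Bob chooses to measure along: correlations, but no signals.)

```python
import numpy as np

# The singlet state (|01> - |10>)/sqrt(2) in the |00>,|01>,|10>,|11> basis.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

def projectors(n):
    """Projectors onto the +1 and -1 eigenstates of spin along n."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    s = n[0] * sx + n[1] * sy + n[2] * sz
    return [(np.eye(2) + sign * s) / 2 for sign in (+1, -1)]

def alice_plus_probability(a, b):
    """P(Alice gets +1) when Alice measures along a and Bob along b."""
    Pa = projectors(a)[0]
    return sum((psi.conj() @ np.kron(Pa, Pb) @ psi).real
               for Pb in projectors(b))

a = np.array([0.0, 0.0, 1.0])
for b in (np.array([0.0, 0.0, 1.0]),
          np.array([1.0, 0.0, 0.0]),
          np.array([0.6, 0.0, 0.8])):
    print(alice_plus_probability(a, b))   # 0.5 every time: no signaling
```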
My current favorite way out of the nonlocality problem is definitely a strange solution: it is to get rid of the notion of spacetime. Spacetime, I believe, will be an emergent property of a combinatorial object which doesn’t have an inherent notion of spacetime. There will still be a notion of causal locality, but this will not necessarily correspond to the notion of locality in the emergent spacetime. So this is why I want to understand what, exactly, a qubit is: I am looking for a realistic description which is compatible with the combinatorial object giving rise to spacetime.
So what is a qubit? Beats me, but looking for it sure beats having to do my laundry.

New Car Go Far

Goodbye QUBITS version 1.0
and hello QUBITS version 2.0
I will now do that sad thing where I mimic becoming a father, but for a car, and with ownership to boot: I am now the proud owner of a Subaru Impreza Outback Sport. That’s right, no more excuses about how I can’t drive because I can only take one passenger. Bah!

Quantum Self Promotion

And now, coming to an arXiv site near you:

Simulating Hamiltonian dynamics using many-qudit Hamiltonians and local unitary control
Michael J. Bremner, Dave Bacon, and Michael A. Nielsen
When can a quantum system of finite dimension be used to simulate another quantum system of finite dimension? What restricts the capacity of one system to simulate another? In this paper we complete the program of studying what simulations can be done with entangling many-qudit Hamiltonians and local unitary control. By entangling we mean that every qudit is coupled to every other qudit, at least indirectly. We demonstrate that the only class of finite-dimensional entangling Hamiltonians that aren’t universal for simulation is the class of entangling Hamiltonians on qubits whose Pauli operator expansion contains only terms coupling an odd number of systems, as identified by Bremner et al. [Phys. Rev. A 69, 012313 (2004)]. We show that in all other cases entangling many-qudit Hamiltonians are universal for simulation.

Twenty Nine

Happy Birthday to me!
Rough year, that 28th: dad died, sister’s kidney is failing, family dog put to sleep, the list goes on and on. However, as anyone who has gambled knows my 29th year can’t be nearly as bad as my 28th. Oh wait…doh!

On Quantum's Universality

Often when I am thinking about the foundations of quantum theory, I am struck by the universality of the theory. Quantum theory (or its related cousin, quantum field theory) applies generically to all physical systems (disregarding the transition to some “classical” theory and, of course, difficulties with both QCD and gravity). Thus we apply quantum theory to our basic theories of physics (electromagnetism, the weak force, the strong force), but we also apply quantum theory to simple atoms and complex molecules, to single electrons and electron gases in metals, etc. Quantum theory is the universal language we use to describe any physical process. If we are thinking about ways to explain quantum theory, then this universality is a bit mysterious: the explanation had better apply to all of these different physical systems, and that seems like a lot of work! Of course, this reasoning is flawed: the universality is an illusion. The reason we can describe a complex molecule by quantum theory is that the fundamental constituents of that molecule obey quantum theory. Separation of different energy scales (and other scales, like localizability) allows us to ignore some of the constituents’ details, and the complex system behaves like a quantum system. So really any explanation of quantum theory need only apply to some basic level of physics (where this level is, I refuse to speculate). While quantum theory appears mysteriously universal, this is an illusion for those pursuing an understanding of the mystery of the quantum.

Sir Real

A formal deal ending the war is expected in the next few weeks, possibly sooner. Since President George Bush is widely seen as the architect of peace, he is perhaps more popular in southern Sudan than anywhere else on earth. At the Rumbek sub-chief’s election one young warrior called Thuapon leaps frenetically in the air, proudly waving a white Barbie-doll in a pink dress. “This is a new wife for President Bush. May God grant him many fertile women with firm bodies and an election victory without problems in Florida.”

from The Economist, May 13, 2004

I Doubt It

I hold that doubt is essential for the discovering and the understanding of the Truth…examine yourselves by that and scrutinize the very knowledge which you are supposed to have gained. For I tell you that orthodoxy is set up when the mind and heart are in decay…But when you invite doubt, it is as the rain washes away the dust of tradition, which is the dust of ages, the dust of belief, and leaves you certain of those things which are essential.

from one-time-messiah turned guru-in-denial J. Krishnamurti

Focus, People!

Last week I attended a workshop sponsored by DARPA on “Scalable Quantum Information Processing via Error Control.” The idea behind the workshop was to bring together theorists who know something about error control with experimentalists who know all about the different proposed quantum computing implementations, and to examine the feasibility of each of the implementations in light of the requirements which arise from error control. It’s been a while since I attended a workshop with “real physicists” (a.k.a. experimental physicists and their physics-calculating theorist brethren). The implementations covered were “Ions and Neutrals”, “Superconducting”, “Spins in GaAs”, “Spins in Si, Si/Ge”, “NMR”, “Linear Optics”, and “Electrons on Helium.” DARPA has recently said that a new program, “FoQuS”, will be starting and will be narrowing down the field of DARPA funding for the different implementations. Needless to say, this has caused a lot of stress for those currently receiving DARPA funding.
Here are some of my observations from the workshop.
Ion traps rock. If I were a starting graduate student who wanted to do quantum computing and do some really rocking quantum computing experiments during my graduate career, I would make a dash straight towards an ion trap quantum computing group. The era of NMR is over and a new era of ion trap quantum computing has begun! Why do I say this at the expense of the other possible implementations? First of all, the ion trap people have already successfully shown the basics of coupling and manipulating qubits, and they can do these with good to great fidelity. One great advantage they have (and share with other AMO proposals) is that they can do high fidelity, fairly fast (approximately 10x the two qubit gate time) measurements. Second, they have really nailed down exactly what is going on in their system: what the decoherence times and mechanisms are, what the heating rates are when you move ions, etc. Third, ion traps have always been questioned on their scalability, and thus there has been a lot of thought in the ion trap community about how to fabricate traps which will realize concrete quantum computing architectures. The recent demonstration of teleportation in ion traps is, I think, the beginning of a long line of beautiful multiqubit quantum protocols.
Locality has been too often ignored. The threshold for fault tolerance is hard to analyze when you restrict yourself to particular geometric architectures (with the important exception of calculations done with toric (surface) codes, which have a local quantum structure and a nonlocal classical structure; here the thing we would like to get around is the ancilla state preparation factories, which require the ancilla states to be swapped into the surface codes. This is just the old locality issue again, but in a much tamer setting. Also, the surface code setup is very nice in three dimensions, but difficult to imagine in two dimensions). If you are going to use a concatenated structure for error correction, then you really need to sit down and think about flying qubits. If not, it seems you might be in deep trouble when trying to construct your architecture.
Fitting it all together. Many implementations will face serious technical difficulties when you try to lay out the control circuitry on a realistic architecture: for some spin implementations this may cause serious problems.
Superconducting qubit visibility. Superconducting qubits have this (not understood?) property that they don’t get a high visibility when they do single qubit Rabi flopping. There are two possible reasons for this to be occurring: one is that the state was not properly prepared, and the other is that the measurement is not high fidelity. Until this visibility problem is well understood, superconducting qubits may be in trouble.
When will things take off? In the next few years, implementations will be working on implementing quantum error correction techniques. So, suppose you implement the five qubit quantum error correcting code. What should we expect to see? Well, since these experiments will most probably be below the memory threshold, the effect of the circuit in terms of real fidelity will be to make things worse! So we will have some years where we measure just how badly we are making things worse. What I can imagine happening is that one group of people will work on these small codes on their systems and constantly improve them until they pass the threshold, while simultaneously a number of people work on architectural scalings for the implementations. And when these two meet their goals, I’m a relative optimist that all hell will break loose and we will see quantum computing scale in a remarkable fashion. When, then, will breakeven be reached? And how do I say this without invoking images of fusion?
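(To put toy numbers to “below threshold makes things worse”: for a distance-3 code like the five qubit code, the logical error rate should scale roughly as p_L ~ c p^2 for some circuit-dependent constant c. The value of c below is invented purely for illustration; breakeven sits at the crossover p* = 1/c.)

```python
# Toy model: a distance-3 code fails when two or more physical faults
# occur, so p_logical ~ c * p**2, with c set by the circuit details.
c = 100.0   # hypothetical constant, for illustration only

for p in (1e-1, 1e-2, 1e-3, 1e-4):
    p_logical = c * p ** 2
    verdict = "worse than a bare qubit!" if p_logical > p else "better"
    print(f"p = {p:.0e}: p_L ~ {p_logical:.0e}  ({verdict})")

# With c = 100 the breakeven point is p* = 1/c = 1e-2: below that
# physical error rate, encoding finally starts to pay.
```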
State of the implementations. My impression of the state of the implementations is as follows. NMR quantum computing in the liquid state has reached its terminus. Ion traps will be taking the lead where NMR quantum computing left off: I expect ion traps to press the next four or five years of the quantum Moore’s law. Quantum dot and superconducting qubits are at the stage where they need to pin down the characteristics of their systems. It was impressive to hear that first attempts at single shot measurements in quantum dots have achieved 64% measurement fidelity. I’m a bit worried about the Kane implementations, and I’m sure they’re not at the ion trap level. Thus I’d say most solid state implementations are a few years (like 3 to 5 years) behind ion traps. Linear optics quantum computing is the wild card in the whole picture. I worry most about the requirements of state preparation in linear optics quantum computing. I worry also about the device complexity, but this doesn’t seem an insurmountable barrier (though, not being an engineer or an experimentalist, what do I know?). Is mode matching a killer? I know the least about electrons on helium, but I get the impression that they are on the cusp of demonstrating two qubit interactions. They will then have to begin the quantification process a la ion traps. Implementations with neutrals also fit in somewhere, but I’m not quite sure where. I have always been shocked by the lack of experimental progress in neutrals: the number of quantum optics people should have led to some nice results by now, but I haven’t seen this (but I claim no authoritative status, as I’m just a lousy uneducated theorist!)
In all it was a fun workshop. The talks were super short, but the conversations after the presentations were at times very interesting. It was amazing to watch an expert in ion traps talk to an expert in superconducting qubits, and other such cross-disciplinary conversations. Normally these two wouldn’t give each other the time of day, but through quantum computing there is a common language. And not just a common language, but also a common set of problems with many common solutions. Quantum computing is so multidisciplinary it is scary. But that’s also the reason it is such a beautiful and exciting field.