Over In Nonlocality World

Nonlocal determinism implies local indeterminism.

In a universe which evolves nonlocally, a localized observer does not have access to enough information to correctly predict his own deterministic evolution. This ignorance of the nonlocal information leads to local laws which are probabilistic. In such a universe there are two mysteries: (1) why is there no signaling? and (2) why is quantum theory a good description of the probabilities arising from the ignorance of the nonlocal information? Further, this interpretation amounts to an untestable hypothesis unless the answers to (1) and (2) are only approximate.
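Here is the kind of thing I mean, as a minimal toy sketch (my own invention, just to illustrate the slogan): a deterministic global rule on two separated bits, where an observer who only ever sees one bit is forced into a probabilistic local law.

```python
import random
from collections import Counter

def global_step(a, b):
    """Deterministic but nonlocal: A's next value depends on the distant bit B."""
    return a ^ b, b          # A flips exactly when B is 1; B is left alone

counts = Counter()
for _ in range(10000):
    a, b = 0, random.randint(0, 1)   # the local observer knows a = 0 but is ignorant of b
    a_next, _ = global_step(a, b)
    counts[a_next] += 1

print(counts)   # roughly {0: 5000, 1: 5000}: the best purely local law for A is probabilistic
```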

Mistaken Identity

For those of you who keep not recognizing me at conferences.
Old Dave: [photo]
New Dave: [photo]
Notice that the key difference is the necklace.
Crazy Dave: [photo]

Homer Jay Simpson Redux

After reading the comments for the Homer Jay Simpson puzzle and getting an email from Ken Brown, I realized that the real puzzle I was thinking of was “into how many pieces can one cut a torus using three planes.”

The Superphysicist Myth

Physicists like to boast that a main benefit of their curriculum is that it teaches “problem solving skills” and that this means that a physicist can jump into just about any field, quickly get up to speed, cut to the heart of the problem, and then solve it. So why do so many theoretical physicists become so specialized?

Those Pesky Quantum Circuits

The righteous Steve Flammia and Bryan Eastin have written a nice LaTeX package for quantum circuits: Qcircuit. Considering that I usually use psfig to painfully draw up circuits for papers, this should be quite a nice improvement. Also check out Ike Chuang’s program QUASM.

The Second Attitude

“If it’s not on the web, then it does not exist!”
Yesterday I went to the library for the first time in a long time. I had forgotten how interesting it can be to browse the shelves. I picked up a copy of Roger Penrose’s thesis “An Analysis of the Structure of Space-Time” (1969?) which has, so far, been a totally fascinating read. I have vague recollections of the importance of spinors in general relativity from the class I took from Kip Thorne, but at the time it hadn’t really occurred to me that this could be more than a nice mathematical trick. Penrose really drives home how the employment of spinors, rather than tensors, for describing general relativity might be a more appropriate representation of space-time.
Also, in his introduction Penrose describes what is my favorite path towards reconciling quantum theory and general relativity:

The second attitude would be that quantum mechanics and general relativity cannot, or at least should not, be forced together into each other’s framework…that what is required is something more in the line of a “grand synthesis,” i.e. a new theory in which general relativity and quantum theory would emerge as different limiting cases, each applicable to its appropriate domain of phenomena, and in which, hopefully, semi-philosophical quantum mechanical questions as the meaning of an “observation” might be resolved. In fact, this…point of view is the one to which I would, myself more readily incline. But it is, for the present, possibly something of the lazy man’s way out, since it provides the relativist with an excuse for not tackling directly the substantial problems of quantization!

In physics, history has shown us many examples of theories whose validity in certain regimes breaks down when the theory is moved into a new regime. Sometimes the resolution is revolutionary (Why doesn’t an electron in orbit around an atom radiate away all its energy? The Bohr atom and then quantum theory!) and sometimes it is not so revolutionary (How do we explain the weak force? Fermi’s theory seems fairly good but it is not renormalizable. Do we need to talk about nonrenormalizable theories? No, the Glashow-Weinberg-Salam theory is renormalizable! We just had the wrong theory!) What astonishes me about the theoretical physics community is just how much is invested in the nonrevolutionary point of view: that it should be possible to “quantize gravity” (either string theory or loop quantum gravity). There are only a few crazies (’t Hooft and Penrose, for example) who seem to be pursuing Penrose’s “second attitude.” Part of the reason for this is dictated by the success of the traditional program: we’ve bagged electrodynamics, the weak force, and the strong force. Since in all of these cases we successfully quantized classical theories, it seems reasonable to suggest that the “final” classical theory, gravity, should also fall to the quantization gods. But historical success does not the future guarantee! And so I will joyously spend too much of my time dreaming up ways to derive quantum theory and general relativity in their respective domains!

What We Are

We are the hollow men
We are the stuffed men
Leaning together
Headpiece filled with straw. Alas!

from T.S. Eliot’s The Hollow Men

What is a Qubit?

A question which I spend way too much time thinking about is: what, exactly, is a qubit? Sounds kind of silly, doesn’t it? So let me explain.
A qubit, of course, is the most basic unit of quantum information. It is a two-dimensional quantum system. Pure states for a qubit are superpositions of |0> and |1>: a|0>+b|1>. Mixed states are two-dimensional positive Hermitian matrices with unit trace. In older times a qubit was known as a two-level system or a pseudo spin one-half system. Now we just say qubit. It’s shorter and cleaner and reminds us of the Bible.
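To make that concrete, here is a minimal numpy sketch (the particular a and b are an arbitrary choice of mine, nothing special about them):

```python
import numpy as np

# A pure qubit state a|0> + b|1> with |a|^2 + |b|^2 = 1
a, b = 1 / np.sqrt(3), 1j * np.sqrt(2 / 3)
psi = np.array([a, b])

# The corresponding density matrix |psi><psi|: Hermitian, positive, unit trace
rho = np.outer(psi, psi.conj())
print(np.allclose(rho, rho.conj().T))              # Hermitian
print(np.isclose(np.trace(rho).real, 1.0))         # unit trace
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))   # positive semidefinite
```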
The first thing we learn about qubits is that a qubit is different from a bit. A bit is either 0 or 1. Sometimes we can prepare a qubit such that we always get a 0 or a 1 when we perform a two-outcome measurement on the qubit, but not always: generically, when we prepare the same state over and over, there are nonzero probabilities of getting either 0 or 1. So then we think, well, maybe a qubit is just a probabilistic bit, like something you see when studying information theory?
But this starts to fall apart pretty quickly. Why? Suppose you start a quantum system in the |0> state. Now if you apply the Hadamard operation to this quantum state, and measure the system in the |0>, |1> basis, you get a 50% chance of outcome 0 and a 50% chance of outcome 1. Apply the Hadamard again (without measuring in between) and when you measure you always get 0. The first strange thing about this is that you have done the same thing (the Hadamard) to the state and gone from 100% 0, to 50% 0 and 50% 1, back to 100% 0. If you think about the qubit as a probabilistic bit, then you would like to map each Hadamard operation to the same Markov process on the probabilistic bit. Clearly this can’t be done. OK, so that’s a bit strange. So it’s not just a probabilistic bit, if the Hadamards we apply are indeed the same Markov operation on the probabilistic bit. Now if you remember something about quantum theory, you remember that a measurement disturbs the state. So when we try to perform a measurement in between, to follow the above experiment step by step, we get something different. Indeed, we get 100% 0, to 50% 0 and 50% 1, to 50% 0 and 50% 1. And this can indeed be achieved by a Markov chain. So if we accept that only the things done between measurements are what we care about, then Hadamard is a different Markov process from Hadamard followed by Hadamard, and we are doing fine. But this is really strange. The system can’t always be a probabilistic bit during the entire course of the process. How do we know whether a future Hadamard is going to be performed or a measurement is going to be performed? Perhaps there is a cosmic conspiracy such that we can’t perform such an operation, but I hate to put God in the initial conditions.
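Here is the little experiment spelled out numerically, as a quick numpy sketch (just reproducing the numbers quoted above):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the Hadamard

def probs(psi):
    """Born-rule probabilities for a measurement in the |0>, |1> basis."""
    return np.abs(psi) ** 2

print(probs(H @ ket0))        # one Hadamard:  [0.5, 0.5]
print(probs(H @ H @ ket0))    # two Hadamards: [1.0, 0.0] -- back to certain 0

# Now measure in between: the intermediate measurement collapses the state,
# and averaging over its outcomes gives 50/50 at the end instead.
p_mid = probs(H @ ket0)
final = sum(p_mid[k] * probs(H @ np.eye(2)[k]) for k in (0, 1))
print(final)                  # [0.5, 0.5]
```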
Well, so far we know that a qubit is not a bit (though it can act like one sometimes), nor is it a probabilistic bit (though it can act like one sometimes), so what is a qubit?
At this point it is nice to mention the usual way people get around the whole question of what a qubit is. They say loudly, “shut up and calculate.” In this view, it doesn’t matter that the system is not a probabilistic bit for the entire evolution: we just use quantum theory to take the whole process and calculate the probabilities of outcomes. A spin-off of this view is that quantum states are epistemic: they represent our knowledge about the system. A qubit a|0>+b|1> is thus a way of representing our knowledge. This clearly has a large component of truth in it, and indeed it should be seriously considered that this is a totally consistent position. But it makes me nervous.
Indeed this view is really like a third view of a qubit: that it is a Bloch vector. We can take a pure state a|0>+b|1> and say that the qubit is the two complex numbers a and b. This view says that a qubit is the quantum state. But now there is something strange about the emergence of probabilities. We say a qubit is a and b, and operations manipulate a and b, but when we make a measurement we never get a or b, only 0 or 1 (projective measurements here, silly objectionist), and with probabilities which depend on a and b in a particular way. So a qubit, which was just two determined numbers, somehow gives rise to probabilities. Why the probabilities? Of course, you can always respond “that’s just the way it is!” and this is fine, but it feels a bit empty.
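Spelling out the Bloch vector picture, again as a little numpy sketch with an arbitrary choice of a and b:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

a, b = np.cos(0.3), np.sin(0.3) * np.exp(1j * 0.7)   # an arbitrary pure state
rho = np.outer([a, b], np.conj([a, b]))

# The Bloch vector: r_i = Tr(rho sigma_i), so that rho = (I + r.sigma)/2
r = np.real([np.trace(rho @ P) for P in (X, Y, Z)])
print(r, np.linalg.norm(r))              # a unit vector for a pure state

# A measurement in the |0>,|1> basis only reveals r_z, and only probabilistically:
print((1 + r[2]) / 2, (1 - r[2]) / 2)    # equals |a|^2, |b|^2
print(abs(a) ** 2, abs(b) ** 2)
```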
To understand why I consider this solution a bit empty, I will have to confess something shameful. The question I’m asking, “what is a qubit?”, is really motivated by the fact that I am a devoted realist. To be a realist as a physicist is a bit like saying you’re skeptical of Paul’s teachings in the Catholic church. Quantum theory may be a theory of knowledge about the system, but the question I want to understand is: knowledge about what? So when I think about a qubit as the numbers a and b, I get disturbed: if these are the real properties, why do we get out the probabilities? Now what bugs me here is not the nondeterministic nature of quantum theory; I don’t give a rat’s ass about the laws being probabilistic. But what does bother me is that if we say a qubit is these deterministic numbers a and b, why do we get probabilities? You haven’t really answered my question of what a qubit is. Just in passing, to bolster my ego and raise my crankpot level, I will note that Einstein had very similar views. Despite the comment about God and dice, it seems clear historically that the nondeterminism of quantum theory didn’t bother Einstein (note that I am avoiding talking about what did bother him!)
So what is a qubit? I think the answer to this should have a few properties. What I’m looking for is a way to describe a qubit such that the probabilities of measurement outcomes arise from our ignorance about the realistic description when we measure the qubit. I want a physical process, or a computation, or a realistic description such that, when we describe a measurement on the system, we intrinsically cannot gain access to all of the information in the description, and this lack of knowledge gives rise to exactly the quantum probabilities. Fundamentally, I would like a combinatorial understanding of where the probabilities in quantum theory come from.
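For a single qubit and a single fixed measurement basis this kind of story is easy to cook up; here is a toy sketch of my own (and it dodges every hard part: other measurement bases, the dynamics, and more than one qubit):

```python
import random
from collections import Counter

# Toy "realistic" description: the qubit carries a hidden variable lam, uniform
# on [0, 1), that we can never read out directly. A measurement in the |0>, |1>
# basis deterministically returns 0 iff lam < |a|^2. Our ignorance of lam then
# reproduces the Born-rule statistics for this one measurement.
a2 = 1 / 3          # |a|^2 for some state a|0> + b|1>
counts = Counter()
for _ in range(100000):
    lam = random.random()                 # the hidden variable we are ignorant of
    counts[0 if lam < a2 else 1] += 1

print(counts[0] / 100000, counts[1] / 100000)   # ~ 1/3 and ~ 2/3
```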
Now, of course, you might say this is all really, really silly, because you know all about Bell’s theorem. Bell’s theorem states that there is no local realistic description reproducing the predictions of quantum theory. What this means is that if I find a realistic description of a qubit, then I will have to make a nonlocal theory to explain two qubits. Nonlocality is heresy in Church Physics, so we are supposed to accept that thinking about realistic descriptions is wrong. But I don’t give a rat’s ass about nonlocality. If you don’t signal, what’s the big deal? There are no inconsistencies, no time travel paradoxes, etc. Quantum theory is just one such nonlocal no-signaling theory: surely there are others!
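For the record, here is the arithmetic that Bell’s theorem rests on, sketched in numpy with the standard CHSH measurement angles: any local realistic model obeys |S| ≤ 2, while the singlet state reaches 2√2.

```python
import numpy as np

# Singlet state (|01> - |10>)/sqrt(2), with spin measurements in the x-z plane
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

def spin(theta):
    """Observable with +/-1 outcomes: cos(theta) Z + sin(theta) X."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def corr(theta_a, theta_b):
    O = np.kron(spin(theta_a), spin(theta_b))
    return np.real(singlet.conj() @ O @ singlet)

a, ap, b, bp = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = corr(a, b) - corr(a, bp) + corr(ap, b) + corr(ap, bp)
print(S)   # -2*sqrt(2) ~ -2.83, violating the local-realism bound |S| <= 2
```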
But now the strange thing happens. If I really do find a realistic description of a qubit and understand how the ignorance must give rise to probabilities, what happens when I consider two qubits? The realistic description must be nonlocal: it should consist of entities which are not at point A or at point B but instead are, in some sense, at both! And now we have another mystery: if the realistic entities are at both points, why can’t we signal between these points? So a realistic description must show how the measurements, the ignorance, and the probabilities at the different points conspire so that we cannot manipulate the realistic description to send signals.
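Quantum theory itself passes this test, of course. Sticking with the singlet state from the previous sketch, here is the no-signaling check in numpy: whatever Bob chooses to measure, Alice’s local statistics are unchanged.

```python
import numpy as np

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = np.outer(singlet, singlet.conj())

def bob_projectors(theta):
    """Projectors onto the eigenstates of Bob's observable cos(theta) Z + sin(theta) X."""
    B = np.array([[np.cos(theta), np.sin(theta)],
                  [np.sin(theta), -np.cos(theta)]])
    _, vecs = np.linalg.eigh(B)
    return [np.outer(vecs[:, k], vecs[:, k].conj()) for k in (0, 1)]

def alice_state(rho, theta_bob):
    """Alice's reduced state after Bob measures along theta_bob and the
    (uncommunicated) outcomes are averaged over."""
    I = np.eye(2)
    post = sum(np.kron(I, P) @ rho @ np.kron(I, P) for P in bob_projectors(theta_bob))
    return post.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # partial trace over Bob

# Whatever Bob measures, Alice sees the same maximally mixed state I/2:
print(alice_state(rho, 0.0))
print(alice_state(rho, 1.234))
```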
My current favorite way out of the nonlocality problem is definitely a strange solution: it is to get rid of the notion of spacetime. Spacetime, I believe, will be an emergent property of a combinatorial object which doesn’t have an inherent notion of spacetime. There will still be a notion of causal locality, but this will not necessarily correspond to the notion of locality in the emergent spacetime. So this is why I want to understand what, exactly, a qubit is: I am looking for a realistic description which is compatible with the combinatorial object giving rise to spacetime.
So what is a qubit? Beats me, but looking for it sure beats having to do my laundry.

New Car Go Far

Goodbye QUBITS version 1.0
and hello QUBITS version 2.0
I will now do that sad thing where I mimic becoming a father, but for a car, and with ownership to boot: I am now the proud owner of a Subaru Impreza Outback Sport. That’s right, no more excuse for why I can’t drive cus I can only take one passenger. Bah!