Over at Information Processing, Steve Hsu has a post about a recent paper (with coauthors R. Buniy and A. Zee), hep-th/0606062: "Discreteness and the origin of probability in quantum mechanics," which is probably of interest to readers who follow quant-ph (it was cross-posted to quant-ph today.)

The basic idea of the paper is as follows. In the many-worlds interpretation of quantum theory, a major challenge is to derive Born's probability rule (which, of course, Born got wrong the first time!) One way that this can be achieved (well, sort of!) is as follows. Consider a world in which we prepare a particular quantum state and measure it in a particular basis. Now repeat this process for the same state and the same measurement. Do this an infinite number of times (more on this later…you know how I hate infinity.) Born's probability rule predicts that the probability of a particular sequence of measurement outcomes is given by [tex]$|\langle s_1|\psi\rangle \langle s_2|\psi\rangle \cdots|^2$[/tex]. In the limit of an infinite number of measurements, the typical fractions of the different outcomes dominate this expression (i.e. the terms which recover Born's rule.) In other words, the sequences with fractions which don't satisfy Born's rule have vanishing norm. So what do you do? Suppose you simply exclude these "maverick" worlds from the theory. In other words, you exclude from the accessible states those with vanishing norm. (There are a bunch of things which bother me about this derivation, but let's just go with it!)

Now the problem addressed in the paper is that, of course, the limit of an infinite number of experiments is pretty crazy. And if you cut off the number of experiments, then you run into the problem that the maverick worlds have small, but non-zero, norm. So why should you exclude them? What is suggested here is that you should exclude them because the Hilbert space of quantum theory isn't quite right (the authors have arguments about this discreteness coming from considerations of gravity.) Instead of our normal Hilbert space, the argument is that a discrete set of vectors from the Hilbert space are all that are physically possible. One picture you might have is that of a bunch of nearly equally spaced vectors distributed over the Hilbert space, and unitary evolution for a time T is followed by a "snapping" to the nearest of these vectors. How does this discretized Hilbert space (isn't calling it a discrete Hilbert space an oxymoron?) fix the problem of a finite number of measurements? Well, you now have a minimal norm on your Hilbert space. States which are too close together appear as one. In other words, you can exclude states which have norms which are too small. So, if you put some discretization on your Hilbert space, you will recover, almost, Born's rule in the same manner as the infinite-limit argument worked. An interesting argument, no?
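As a toy illustration of the "snap-to" picture (entirely my own construction, not the authors' proposal), one can lay down a finite grid of qubit states on the Bloch sphere and send any state to its highest-fidelity grid neighbor:

```python
import numpy as np

# Hypothetical discretization of a qubit: a finite grid of pure states
# |psi(theta, phi)> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>,
# plus a "snap" map to the nearest grid state by overlap.
def grid_states(n_theta=16, n_phi=16):
    states = []
    for theta in np.linspace(0, np.pi, n_theta):
        for phi in np.linspace(0, 2 * np.pi, n_phi, endpoint=False):
            states.append(np.array([np.cos(theta / 2),
                                    np.exp(1j * phi) * np.sin(theta / 2)]))
    return states

def snap(psi, grid):
    """Return the grid state with the largest overlap |<g|psi>|."""
    return max(grid, key=lambda g: abs(np.vdot(g, psi)))

grid = grid_states()
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])   # an arbitrary pure state
print(abs(np.vdot(snap(psi, grid), psi)))      # close to 1 for a fine grid
```

Note that this particular snap rule is deterministic by construction (ties aside), which matters for the discussion of randomness below.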

One interesting question I've pondered before is how to make a discretized version of Hilbert space. Take, for example, a qubit. Now you can, as was suggested above, just choose a finite set of vectors and then proceed. But what if you want to avoid the "snapping" to this grid of vectors? Well, you might then think that another way to proceed is to have a finite set of vectors and then to allow unitary transforms only between these vectors. But when you do this, what unitary evolution you can apply depends on the vector you are applying it to. Nothing particularly wrong about that, but it seems like it might lead to a form of quantum theory with some strange nonlinearities, which might allow you to solve hard computational problems (and thus send Scott Aaronson's mind spinning!) So what if you don't want to mess with this linear structure? Well, then you're going to have to require that the discrete set of vectors are transformed into each other by representations of some finite group. Does this really limit us?

Well it certainly does! Take, for instance, our good friend the qubit. An interesting question to ask is: what are the finite subgroups of the special unitary transforms on a qubit, SU(2)? It turns out that there aren't very many of these finite subgroups. One finite subgroup is simply the cyclic group of order n. Since we are dealing with SU(2) and not SO(3), this is just the set of rotations in SU(2) about a fixed axis by multiples of the angle [tex]$\frac{4 \pi}{n}$[/tex]. Another subgroup is the binary dihedral group of order n, which is just like the cyclic group, but now you add an inversion which corresponds to flipping the equator where the cyclic group states are cycling. Finally there are the binary tetrahedral group, the binary octahedral group, and the binary icosahedral group, which are SU(2) versions of the symmetries of the correspondingly named platonic solids. And that's it! Those are all the finite subgroups of SU(2)! (There is a very cool relationship between these subgroups and ADE Dynkin diagrams. For more info see John Baez's week 230.) So if we require that we only have a discrete set of states and that the group structure of unitary operations be maintained, this limits us in ways that are very drastic (i.e. that conflict with existing experiments.)
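For concreteness, here is a quick numerical check (my own sketch) that rotation by 4π/n about a fixed axis really does generate a cyclic subgroup of SU(2) of order n. The angle is 4π/n rather than 2π/n because SU(2) double covers SO(3): a 2π rotation in SU(2) gives minus the identity, not the identity.

```python
import numpy as np

def rz(angle):
    """Rotation about the z-axis, written as an SU(2) matrix."""
    return np.array([[np.exp(-1j * angle / 2), 0],
                     [0, np.exp(1j * angle / 2)]])

n = 8
g = rz(4 * np.pi / n)
elements = [np.linalg.matrix_power(g, k) for k in range(n)]  # the subgroup

assert np.allclose(np.linalg.matrix_power(g, n), np.eye(2))        # g^n = I
assert np.allclose(np.linalg.matrix_power(g, n // 2), -np.eye(2))  # g^(n/2) = -I
assert abs(np.linalg.det(g) - 1) < 1e-12                           # det 1: in SU(2)
```

The middle assertion is the double-cover at work: halfway around the cycle you land on -I, which acts trivially on states but is a distinct group element.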

So if one is going to take a discrete set of vectors and use only them for quantum theory, it seems that you have to modify the theory in a way which doesn’t preserve the unitary evolution. Can one do this without drastic consequences? I think that is an interesting question.

In Lucien Hardy’s axiomatic approach to quantum mechanics, continuity plays a central role:

“Axiom 5 Continuity. There exists a continuous reversible transformation on a system between any two pure states of that system.”

I wonder if he could get away with a finite set of vectors.


"One picture you might have is that of a bunch of nearly equally spaced vectors distributed over the Hilbert space and unitary evolution for a time T is followed by a 'snapping' to the nearest of these vectors."

Yuck! Yuck! Yuck! Can anyone take that seriously as a picture of physical reality?


Sure, why not? Maybe there is an elegant structure there just waiting to be discovered!


Dave,

in your picture you would introduce a 'new layer' of randomness. To which grid point would the vector snap (e.g. if you are exactly in the middle of two)?


Well, it's not my picture, it's the paper's authors' picture! I think you'd want a deterministic snapping, so that the Born rule argument has legs.


We don’t advocate “snap-to” for any aesthetic reason — it is just one possible realization of discreteness. The main point is that quantum gravity may enforce a fuzziness on quantum state space, as it does on spacetime. Why would you insist on a continuum for state space when all indications (strings, loop gravity, etc.) are that spacetime itself is not continuous? And, no experiment will ever exclude the possibility of some level of discreteness to state space.

You necessarily get a small amount of nonlinearity, but not any more than when you approximate Schrodinger’s equation in order to solve it on a digital computer — numerical rounding of the wavefunction is just “snap-to”. If the discreteness is at the 10^{-100} level it is probably not directly detectable, but we argue it may affect the Born rule.

The point we’d like to emphasize is that there does not seem to be a satisfactory derivation of the Born rule in many worlds without modifying QM in some way. You may not believe in many worlds (I’m not entirely sure I do), but it is one of the more common interpretations of QM (embraced by Hawking, Feynman, Gell-Mann, Weinberg, etc.), so to find that it doesn’t reproduce the Born rule is disturbing!


Hmm…. It seems that there is nothing particularly quantum about the problem this paper is trying to address – the same problem exists for any attempted frequentist derivation of probability, whether classical or quantum. We could just as well say that all sequences with probability Facts, Values and Quanta by Marcus Appleby to have your mind changed.

In my view, Wallace did a pretty good job of justifying the Born rule in many-worlds without modifying quantum theory. You may not agree with all his premises, but I don’t think one can hope for anything much better than this in the many-worlds context.


My last comment got mangled. It should read:

Hmm…. It seems that there is nothing particularly quantum about the problem this paper is trying to address – the same problem exists for any attempted frequentist derivation of probability, whether classical or quantum. We could just as well say that all sequences with probability Facts, Values and Quanta by Marcus Appleby to have your mind changed.

In my view, Wallace did a pretty good job of justifying the Born rule in many-worlds without modifying quantum theory. You may not agree with all his premises, but I don't think one can hope for anything much better than this in the many-worlds context.


Aargh, it happened again. OK, you’re just going to have to read about it on my blog when I get around to writing a post.


Matt, I could not agree with you more… and so did some guy named Kolmogorov. Frequentists always run into trouble when they try to make atypical events not occur almost surely. 🙂 So, I agree that probability measures should be postulated. However, as I understand it, that's not the issue being raised. The question at bar is how to get mathematically rigorous quantum probabilities without raising the observer to a different level than the rest of the universe (and without requiring a continuous number of different universes, which for some reason would be more crazy than a countably large number of worlds! what?!). I don't think Wallace satisfies the "mathematically rigorous" part.

As much as I hate the many worlds interpretation, it does more or less treat the observer and the observed according to the same set of physical laws. This aspect is otherwise lacking in Bayesian reasoning. Interestingly enough, classical Kolmogorov probability has a disturbing similarity to this quantum proposal since even though a classical probability space can be continuous, the measurable event space must be countable (or sigma additive, I guess). So, maybe there ARE a continuous number of different worlds, but the quantum measurable ones (whatever that means) are only a sigma-algebra generated by that space…. the Bohrworld algebra (pronounced Borel-d) if you will… good lord.


There’s some really interesting control theoretic work by Marsden and Littlejohn where they take Lagrangian mechanics from a continuous to a discrete space in a manner such that variation produces valid equations of motion on the discrete space while preserving the symmetries on the continuous space. I wonder what would happen if that idea was turned loose on a quantum Lagrangian.


OK, a longer and corrected version of my comment can now be found here.


Matt,

The problem is not with strict frequentism. If we had a very long (but not infinite) sequence of outcomes that supported the Born rule, most physicists would accept it. I know this is true because that is the current state of affairs on our branch of the wavefunction! 🙂

However, if you take the usual MW logic with only a finite number of outcomes, you then have to explain why we (the ones having this conversation) are not at all typical of physicists on all the branches. There is a very tricky discarding of zero norm states in the infinite limit that cannot be justified *at all* in the finite case (see Hartle or DeWitt). That leaves the *vast majority* of observers with data sets (maverick worlds) that don’t match the Born rule at all. Why are we among the tiny subset living on a non-maverick branch?

To be honest I don’t find Wallace’s argument convincing, and have never heard it endorsed by physicists that I know. I don’t see how it addresses the points made above, which boil down to “why don’t we live on a maverick branch? They are so much more numerous!” The original Everett argument is that as N goes to infinity the maverick branches have zero norm, so cannot be physical states. They are discarded on that basis (whether you like that or not is another question — Dave seems to find even that reasoning kind of suspect). This argument fails for any finite N, no matter how large, since the maverick worlds will all have small but non-zero norm and are presumably populated by people just like you and me 🙂
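The "maverick branches are so much more numerous" point is easy to make quantitative with a toy count (my numbers, not from the paper): take a biased measurement with Born probability p = 0.9 for outcome 1 and N = 100 trials, and compare the maverick branches' share of the branch count (one branch per outcome sequence) with their share of the total squared norm.

```python
import math

# Branches whose observed frequency misses p by more than delta
# are "maverick". Count them two ways: raw count vs. squared norm.
p, N, delta = 0.9, 100, 0.2
maverick_count = 0
maverick_norm2 = 0.0
for k in range(N + 1):
    if abs(k / N - p) > delta:
        maverick_count += math.comb(N, k)
        maverick_norm2 += math.comb(N, k) * p**k * (1 - p)**(N - k)

print(maverick_count / 2**N)  # fraction of branches by count: nearly all
print(maverick_norm2)         # their total squared norm: nearly zero
```

By sheer count almost every branch is maverick, so without some principled way of weighting or excluding them, the typical observer sees data that flatly contradicts the Born rule.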


Steve, I think the problem is exactly the same with strict frequentism. There one would like to identify the probability with the infinite limit of the relative frequency. However, we can only say that the relative frequency converges "in probability" to the "true" probability. In other words, we have to presuppose some notion of probability in order to derive the meaning of probability, which is viciously circular.

In MW, we would like to identify the probability of a branch of the wavefunction with the norm squared of that branch. We can show that the norm of the worlds not conforming to the Born rule converges to zero, but we need to presuppose the idea that this represents a small probability in order for this to be a meaningful statement.

You can see that the arguments have the same structure, so I think the problem is with frequentism rather than with quantum mechanics.


Matt,

You wrote

“In MW, we would like to identify the probability of a branch of the wavefunction with the norm squared of that branch.”

That isn’t really the argument originally described by DeWitt (it is not clear what Everett meant, since he is very brief on this point, but he did discuss it in depth with DeWitt, so we might assume that DeWitt’s argument is faithful).

The argument of DeWitt (and Hartle) does not try to identify the probability of a branch of the wavefunction with the norm squared of that branch (that would be circular). It just says that we will, in the infinite limit, exclude all branches with *exactly* zero norm. If you accept that physical states should not have exactly zero norm, this is a reasonable assumption. Then, they show that every branch that remains (again, for infinite outcomes) will satisfy every possible statistical test of the Born rule (the others are called maverick worlds). So, we could be on *any typical branch* of the remaining non-maverick ones, and we would deduce the Born rule from our observations.
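In symbols (my paraphrase, using a standard large-deviation bound rather than anything from the paper): for a two-outcome measurement with [tex]$p = |\langle 1|\psi\rangle|^2$[/tex], the total squared norm of the maverick branches after N trials is [tex]$\sum_{|k/N - p| > \delta} \binom{N}{k} p^k (1-p)^{N-k} \leq 2 e^{-2 N \delta^2}$[/tex], which vanishes as [tex]$N \to \infty$[/tex] but is strictly positive for every finite N. That is exactly the dichotomy being described: the exclusion of exactly-zero-norm states does all the work, and it is only available in the limit.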

*In particular, once you throw out the zero norm states the “measure” on branches does not have to have anything to do with norm — in fact, you don’t need a measure*

This is because, once the zero norm mavericks are gone, you will deduce the Born rule on any of the remaining branches.

I think this argument contains more than the usual frequentist ideas. It depends on all maverick worlds being zero norm states, and that zero norm states are unphysical.

When you run the argument again for finite N you get a completely different conclusion — there is no reason to throw out maverick worlds, unless you make some association between norm and probability, but that is of course circular. The original Everett-DeWitt-Hartle argument is *not* circular, but does depend crucially on the zero norm technicality.

Sorry to go on so long, hope that made sense 🙂


I’m not objecting to the fact that you have zero norm in the infinite limit. In the case of frequentist derivations of probability you also have zero probability in the infinite limit. Of course, whether you should identify either of these with impossibility, even in the limit, is a separate topic of debate.

In finite, but large, cases you have nonzero but small probability on the “maverick” cases in the frequentist justification of probability and nonzero but small norm on the “maverick” branches in the Everett-DeWitt-Hartle argument. In both cases you are actually trying to justify the meaning of probability, so you can’t just throw these out by virtue of the fact that they are “small”.

Although the Everett-DeWitt-Hartle argument is not strictly circular, since you do not need to invoke the notion of probability in the argument, the structure of the difficulty is almost identical, which is why I think that it has more to do with frequentism than with QM.


I don’t think Wallace’s argument completely solves the problem. It stands or falls on whether you accept his assumptions, and you can reasonably question them. However, I do think it is the best we are likely to get within the MW framework without modifying QM.

I believe Caves, Fuchs + Schack would argue that there is no problem with introducing the “wavefunction of the universe”. For them, the wavefunction is not a state of reality, so the “wavefunction of the early universe” simply represents your beliefs now about what the universe was like then. It depends on the existence of an agent only to the same extent that a probability distribution does. It is completely analogous to introducing a probability distribution over possible universes in the classical case, which I assume you would agree is unproblematic.

My central problem with your argument is that it would entail a modification of classical probability theory. Usually, we think that in any finite number of trials, however large, there is a small probability that the relative frequency disagrees with the “true” probability by an arbitrary amount (I am using objectivist language here for the sake of argument). You are saying that for large enough finite N, the relative frequency agrees with the probability exactly (or at least almost exactly). Disregarding any claims about quantum gravity, it is a procrustean argument that cuts off the small tails of the probability distribution for no other reason than that we don’t like them.


Dave,

I really don’t think the argument from quantum gravity is compelling. It amounts to a denial of the question by arguing that it won’t exist in a future theory that we don’t know much about yet. You could solve every problem in the foundations of any theory by making such a move. There is a long history of such moves, most of which have contributed little, e.g. arguing that the derivation of the second law of thermodynamics depends on details of the “true” interpretation of quantum mechanics instead of trying to derive it in the classical case first. Both of these are cases of confounding two problems that should be addressed independently if at all possible.

Besides, there are many people (string theorists and loop quantum gravitists in particular) who do not believe quantum gravity requires this discreteness of Hilbert space. Their theories may very well be wrong, but in any case who is to say that the successful theory will even have a formulation in terms of Hilbert space at all, or reduce to anything remotely resembling the state space that Buniy, Hsu and Zee use? Personally, I suspect that if the discreteness argument is right then we will have to move to different mathematical structures entirely because it is almost impossible to implement physical symmetry groups on a discretized Hilbert space in any natural way.


Steve,

It doesn’t seem to me that the MW framework does any better. You still can’t ask what properties the early universe has, you just get a range of possibilities in the different branches. It seems to me that the only reason why quantum cosmologists like MW is that they take the idea that a quantum state represents a state of reality far too seriously.


Well, I don’t think that Steve and I are going to reach any convergence of our opinions. We should probably give Dave’s readers a break and continue the discussion some other time, some other place.


"In finite, but large, cases you have nonzero but small probability on the 'maverick' cases in the frequentist justification of probability and nonzero but small norm on the 'maverick' branches in the Everett-DeWitt-Hartle argument. In both cases you are actually trying to justify the meaning of probability, so you can't just throw these out by virtue of the fact that they are 'small'."

If you are simply throwing them out to justify the notion of probability or of Born's rule, you are in trouble (the argument seems circular.) But it seems to me that the argument in Steve's paper is (slightly) different. The reason for "throwing out" the small norm states is not ad hoc, but is justified from physical considerations (i.e. gravity puts a limit on how finely you can distinguish different quantum states.) Another way to say it is that if quantum theory and gravity are to coexist, then quantum theory needs modification, and this modification leads to a new theory which adds some fuzziness to Hilbert space. I put this under the "modify quantum theory" category. Of course, in hindsight this looks ad hoc.

Personally I’m willing to accept it until I can show that it leads to some physical effect which contradicts experiment. I guess this is why I am so neutral on interpretations (okay, not really, but let’s pretend!) The modification Steve and coauthors propose allows me to behave as if everything I’ve been doing under a different interpretation is fine. So I can convert to his side or keep what I have. But what is interesting here is that the effect postulated by the authors should have physical consequences. Of course, what the authors argue in the paper is that these consequences are beyond our current (or nearly any) experimental reach. I actually find this a weakness of the paper.

Boy I’m sure going to get it from the foundations crowd for that last paragraph, aren’t I?


Matt,

I agree with your last comment — both frequentist probability and standard MW arguments have trouble making the leap from infinity to finite number of trials.

We are making a highly speculative leap in proposing fuzziness of quantum state space, which might fix the problem.

It would be nice if the problem fixed itself (without modifying qm), but I don’t think it can within many worlds if N is finite. Do you agree with that, or do you still think Wallace’s arguments solve the “why aren’t we on a maverick branch?” problems?

I find the Bayesian interpretations a la Caves, Fuchs, et al. interesting, but I don’t see how they help with, e.g., quantum cosmology. What are the rules by which the very, very early universe evolves before there are any macroscopic observers?


Matt,

It’s possible that I don’t understand Caves et al. that well, but I couldn’t figure out how you would do quantum cosmology without an observer in their framework. We can ask how likely it is given what a particular observer sees that the early universe had a particular property, but this is short of describing the dynamics of the early universe in the absence of an observer. (Perhaps the latter is too much to ask, they would say, but that makes their formulation too observer-centric to me — at least I could hope for more from the fundamental theory of physics.)

Regarding our paper, it is true that the cutoff on maverick worlds is put in by hand (esp. if you don’t like the gravity motivation). However, for a large range of (very small) values of discreteness you recover the Born rule. Note there are small deviations from the Born rule predicted by the model, related to the value of the discreteness parameter, although probably not measurable by practical means.
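The robustness Steve describes can be seen in a toy model (my own construction; the paper's cutoff is stated in terms of component norms, which I simplify here): drop every length-N outcome sequence whose amplitude falls below a threshold eps, renormalize what survives, and look at the mean observed frequency.

```python
import math

def post_cutoff_mean_freq(p, N, eps):
    """Mean observed frequency of outcome 1 over the branches that
    survive an amplitude cutoff eps, renormalized (toy model)."""
    num = den = 0.0
    for k in range(N + 1):
        amp2 = p**k * (1 - p)**(N - k)   # squared amplitude of one sequence
        if math.sqrt(amp2) >= eps:       # this sequence survives the cutoff
            w = math.comb(N, k) * amp2
            num += (k / N) * w
            den += w
    return num / den

p, N = 0.3, 200
for eps in (1e-60, 1e-40):
    print(eps, post_cutoff_mean_freq(p, N, eps))  # both stay near p = 0.3
```

Over many orders of magnitude of (tiny) eps the surviving branches still report the Born frequency; only a far coarser cutoff would visibly distort the statistics, which is the sense in which the discreteness parameter is unconstrained in practice.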


PS Matt, I forgot to mention that in the non-maverick branches remaining after the cut-off there can be very long runs of atypical outcomes (just not long compared to the total length N; but observers only see small subsets anyway). So, we aren’t really disagreeing with classical probability theory in the way you describe below (assuming I understand your comment correctly).

“Usually, we think that in any finite number of trials, however large, there is a small probability that the relative frequency disagrees with the ‘true’ probability by an arbitrary amount (I am using objectivist language here for the sake of argument).”


Actually I kind of enjoy it! Of course you two may eventually run out of stamina.


Matt,

Thanks for the discussion — I actually enjoyed it 🙂

Steve


The paper says “any (maverick!) components of Psi with a norm less than sqrt(N)epsilon can be removed from the wavefunction.”

So the authors are picking a basis for the wavefunctions and ‘projecting out’ parts with small coefficients. But this depends on the choice of basis. Who chooses the basis?
