From a crazy model to a concrete question: is there a nice mathematical structure hidden here?
Once upon a time I wrote a crazy paper (arXiv:quant-ph/0309189) on quantum computation in the presence of closed time-like curves. In this model, one identifies two types of quantum systems: those that are “chronology respecting” and those that traverse “closed time-like curves” (CTCs). These two types of quantum systems can interact amongst themselves and also with each other. A typical setup is as in the figure at the right, where n chronology respecting qubits interact with l closed time-like curve qubits via a unitary U. One then makes two assumptions: (1) that the CTC qubits are “initially” unentangled from the non-CTC qubits and (2) that the evolution is consistent (no grandfather paradox). These two imply the condition

$\rho_{CTC} = \mathrm{Tr}_{CR}\left[ U \left( \rho_{CR} \otimes \rho_{CTC} \right) U^\dagger \right]$

where $\rho_{CTC}$ is the density matrix for the CTC qubits and $\rho_{CR}$ is the density matrix input of the chronology respecting qubits. This is Deutsch’s model of quantum computation in the presence of closed time-like curves. Deutsch showed that the above equation always has at least one solution: in other words, you can formulate a theory in which the grandfather paradox never appears. But if you think a little bit about it, the above machinery, while consistent, is seen to be nonlinear. Since the CTC input to the unitary depends on the output of the unitary, the non-CTC qubits will see an evolution which is nonlinear in their input state. Since nonlinearity in quantum computing gives us a lot of power (Abrams and Lloyd: arXiv:quant-ph/9801041), you might guess that this model packs quite the computational punch.
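As a concrete (and entirely unofficial) sketch of how one might hunt for such a consistent CTC state numerically, here is some toy numpy code; the function names and the trick of averaging the iterates are my own choices, not anything from the paper (plain iteration of the map need not converge for every U, but the running average of the iterates does in finite dimensions):

import numpy as np

def deutsch_map(U, rho_cr, rho_ctc):
    # One application of the consistency map:
    #   rho_ctc -> Tr_CR[ U (rho_cr ⊗ rho_ctc) U^dagger ],
    # with the chronology-respecting (CR) factor first in the tensor product.
    d_cr, d_ctc = rho_cr.shape[0], rho_ctc.shape[0]
    rho = U @ np.kron(rho_cr, rho_ctc) @ U.conj().T
    rho = rho.reshape(d_cr, d_ctc, d_cr, d_ctc)
    return np.einsum('ijil->jl', rho)      # partial trace over the CR factor

def find_ctc_state(U, rho_cr, d_ctc, n_iter=2000):
    # Average the iterates of the map, starting from the maximally mixed state;
    # the average of the first n iterates converges to a fixed point.
    rho = np.eye(d_ctc, dtype=complex) / d_ctc
    avg = np.zeros((d_ctc, d_ctc), dtype=complex)
    for _ in range(n_iter):
        rho = deutsch_map(U, rho_cr, rho)
        avg += rho
    return avg / n_iter

# Example: one CR qubit (the control, in state |1>) and one CTC qubit (the
# target) interacting via a CNOT; the returned CTC state is maximally mixed,
# the quantum analogue of the 50/50 classical bit discussed below.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
rho_cr = np.array([[0, 0], [0, 1]], dtype=complex)
print(np.round(find_ctc_state(CNOT, rho_cr, 2), 3))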
And indeed, recently Aaronson and Watrous have shown that if you give a classical or quantum computer access to closed time-like curves, you end up with a model which has the computational power of PSPACE (arXiv:0808.2669). Wait: classical model? Doesn’t the classical model lead to paradoxes? Well, if you insist on deterministic solutions then yes: if I take a single CTC bit and have a bit flip operation act on it, then there is no consistent solution. But if I loosen this restriction and allow solutions with probabilities, then there is a consistent solution: if the bit is fifty percent 0 and fifty percent 1, then there is no paradox. In general, then, one can imagine a classical model in which the U in the figure above is replaced by a stochastic map and one insists on consistency of a probability distribution.
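Spelled out in matrix form, the bit-flip example is just the observation that the bit-flip stochastic matrix fixes only the uniform distribution:

$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix} = \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix},$

while no deterministic distribution (a vector with a single 1 in it) solves this equation; that failure is the grandfather paradox.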
Now there are many ways to sell Aaronson and Watrous’ result. They like to sell it as “in the presence of time travel quantum computation and classical computation are equivalent.” But one way I like to think about this result is that it gives us the first model of classical information theory which can possibly, with an appropriate restriction, yield quantum theory. Whah? Well this random thought emerges from a sort of random thought process: the class of problems efficiently solvable on a quantum computer is called BQP. The relation of this complexity class to other complexity classes isn’t well established (for example, it isn’t known if BQP lies in the polynomial hierarchy). But one of the oldest known results is that BQP is in PSPACE. The best improvement we have on this is probably that BQP is in PP (or a slightly more complicated class), but let’s just fixate on the BQP in PSPACE result. Because of the result of Aaronson and Watrous, this means that there is an efficient “simulation” of quantum computers using classical computers with closed time-like curves. In other words, if we allow for classical models with closed time-like curves, we arrive at a model where simulating quantum theory wouldn’t lead to any direct conflicts with complexity theory.
Now of course, it is important to note that classical computation with closed time-like curves is more powerful than quantum computers (unless BQP=PSPACE), so this really isn’t telling us too much. Except I would argue that it motivates us to examine classical theories in which strange causality-mucking phenomena are used to “derive” quantum theory.
In thinking about these subjects over the last few years, I’ve toyed with a ton of different models which give rise to some interesting problems. Among these are a set of questions about the structure of fixed points under composition. Let $P$ be a $d \times d$ stochastic matrix. “Stochastic” means that $P_{ij} \geq 0$ and that $\sum_i P_{ij} = 1$, i.e. that this is the transition matrix for some probabilistic transform on a $d$-dimensional space. Let $Q$ be another $d \times d$ stochastic matrix. Now $P$ has at least one fixed point, i.e. a probability distribution $p$ such that $Pp = p$ (this follows from the Perron-Frobenius theorem). The set of fixed points will be a convex set. Similarly $Q$ has at least one fixed point and will have a convex set of fixed points. Call the first set of fixed points $F_P$ and the second set of fixed points $F_Q$.
Now the general question that I keep running up against when thinking about these crazy classical models is: is there a nice mathematical structure for discussing how $F_P$ and $F_Q$ are related to the fixed points of the composed stochastic matrix $R = QP$? This feels like something that must be well understood, but my cursory scanning of the literature hasn’t turned up anything too interesting. Does anyone know of a set of tools or results which can be used to talk about this structure of fixed points under composition? Special cases where the matrices are not merely stochastic, but are doubly stochastic or are permutation matrices, are also probably of interest in dreaming up these strange models.
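For poking at small cases numerically, here is a rough sketch (my own toy code, assuming numpy and the column convention $\sum_i M_{ij}=1$ used above) that extracts the fixed-point subspaces of two stochastic matrices and of their composition:

import numpy as np

def fixed_point_space(M, tol=1e-9):
    # Basis for the eigenvalue-1 eigenspace of a column-stochastic matrix M.
    # The fixed probability distributions are the intersection of this
    # subspace with the probability simplex.
    w, v = np.linalg.eig(M)
    cols = [v[:, k].real for k in range(len(w)) if abs(w[k] - 1) < tol]
    return np.array(cols).T

# Toy example (these particular matrices are placeholders, nothing special):
P = np.array([[0.5, 0.2, 0.0],
              [0.5, 0.8, 0.0],
              [0.0, 0.0, 1.0]])
Q = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.5, 0.5]])
for name, M in [("P", P), ("Q", Q), ("R = QP", Q @ P)]:
    print(name, "has a fixed subspace of dimension", fixed_point_space(M).shape[1])

In this particular toy case both P and Q have two-dimensional fixed subspaces, while the composition QP retains only their one-dimensional overlap, which is part of what makes the general question interesting.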
Dave,
The fact that the CTC qubit’s state is dependent on the chronology-respecting qubit’s state is only one interpretation. You could just as easily say that this apparent dependency is a condition for which types of states may be coupled.
In fact, I have recently found that the most generalized version of Deutsch’s condition is probably not restrictive enough since the underlying behavior is apparent in many seemingly normal instances.
Patrick Hayden had asked me to finish up a proof that CTC qubits and chronology-respecting qubits could not be entangled and instead I discovered that their ability to be entangled depends upon the unitary transformation and, in some cases, they *may* be entangled.
I’ll send you the file (let me know if you want my Mathematica notebooks too – the algebra for the tripartite case is nasty).
R has at least one fixed point. If it’s unique, there need be no relationship between fixed points of P and R. (Q can project to a single vector, which becomes the unique fixed point of R.)
If R has N > 1 fixed points, then things get more interesting. The fixed points are closed under linear combination, so they span a subspace (I’m taking N to be the dimension of that subspace). An N-dimensional fixed subspace gives an N-symbol noiseless code for R (not necessarily obvious, but see arXiv:0705.4282), and therefore an N-symbol correctable code for P. Q is the recovery map.
So, the dimensionality of R’s fixed-point space (N) is tightly bounded by the size of P’s largest zero-error code, and the fixed-point set itself has to sit inside the span of one of those codes. You can also transpose R and get an identical bound in terms of Q^T’s zero-error codes. (Yes, I know Q^T isn’t necessarily stochastic, but it works anyway.)
The zero-error codes are independent sets of P’s adjacency graph, so (a) there can be quite a few of them, and (b) finding the bound on N is isomorphic to Maximum Clique.
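In case it helps anyone experiment with this bound, here is a brute-force sketch (my own naming and code, assuming numpy and column-stochastic matrices) of the adjacency/confusability graph and its largest independent set, i.e. the largest zero-error code:

import itertools
import numpy as np

def confusability_graph(P, tol=1e-12):
    # Inputs i and j are adjacent if some output is reachable from both
    # with nonzero probability (so they cannot share a zero-error code).
    d = P.shape[1]
    A = np.zeros((d, d), dtype=bool)
    for i, j in itertools.combinations(range(d), 2):
        if np.any((P[:, i] > tol) & (P[:, j] > tol)):
            A[i, j] = A[j, i] = True
    return A

def largest_zero_error_code(P):
    # Brute-force maximum independent set: fine for small d, but in general
    # this is as hard as Maximum Clique, as noted above.
    A = confusability_graph(P)
    d = A.shape[0]
    for size in range(d, 0, -1):
        for S in itertools.combinations(range(d), size):
            if not any(A[i, j] for i, j in itertools.combinations(S, 2)):
                return list(S)
    return []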
So you could say that quantum mechanics and general relativity are computationally equivalent?
Essentially. What I find interesting, though, is that the outline of Aaronson & Watrous’ result goes something like this: something from GR (a CTC) is used to make QM and GR computationally equivalent. Since most people think CTCs don’t exist, perhaps this at least gives us a blueprint for how we might reconcile QM and GR in general (or, perhaps, it even hints they can’t be reconciled, but I doubt anyone buys that).
Something like the quantum Horn conjecture will probably be useful.
[1] http://www.ams.org/notices/200102/fea-knutson.pdf
[2] http://terrytao.wordpress.com/2007/04/19/open-question-what-is-a-quantum-honeycomb/
[3] math/0303013
Oops, sorry Robin. Your comment is now posted (stupid spam filters….it’s probably your last name 🙂 )
Are you looking for a general theory with general properties?
I did a quick scan of the P-F theorem [as always, I’m not good on the in-depth thinking]. Basically, your conditions guarantee that the matrix will have an eigenvalue equal to 1, with a nonnegative eigenvector. It is a fixed point theorem, but that framing kind of clogs up how to think about it.
An eigenvector is a one-dimensional invariant subspace of a matrix. The fixed-pointedness comes from the fact that the problem is constrained to the ball (really, the simplex of probability distributions). So, your question is really about how eigenvectors are affected by matrix composition.
That’s probably something like conformal theory. My guess, though, is that there is a lot to calculate on a case by case basis, and that there is sparse general information.
I don’t have anything useful to say about your question, but I was just wondering about this:
Because of the result of Aaronson and Watrous, this means that there is an efficient “simulation” of quantum computers using classical computers with closed time like curves.
So you could say that quantum mechanics and general relativity are computationally equivalent?
Sorry, my previous comment was unhelpful. The quantum Horn conjecture gives you information about the eigenvalues of the product of two matrices in terms of the eigenvalues of the two factors. Here you are only interested in the multiplicity of the largest eigenvalue, namely 1, which can be deduced in a much more elementary fashion. The multiplicity tells you the dimension of the fixed point set.
If R = QP, then Fix(Q) ∩ Fix(P) is contained in Fix(R). Can Fix(R) contain anything else? The eigenvalues of a stochastic matrix are bounded in absolute value by 1, so if a candidate fixed point of R (aka eigenvector with eigenvalue 1) is not in Fix(P), then the only way it can be in Fix(R) is that it is a common eigenvector of both P and Q with eigenvalue -1.
Hey, nice formatting! I shudder to imagine reading this without typesetting – ugh.
Richard,
Unfortunately that argument doesn’t work. For one thing, stochastic matrices can have complex eigenvalues (e.g. permutations). So any eigenvector of P with a unit-modulus eigenvalue can end up as a fixed point of R.
It’s even worse than that, though. Consider P=
[ 1/2 1/3 0 ]
[ 1/2 1/3 0 ]
[ 0 1/3 1 ]
This has eigenvalues {1, 5/6, 0}, so it has a single fixed point. However, if Q=
[ 1 1 0 ]
[ 0 0 0 ]
[ 0 0 1 ]
then R = PQ has eigenvalues {1,1,0}.
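For anyone who wants to check this numerically, here is a quick sketch (assuming numpy, with the fractions written as floats):

import numpy as np

P = np.array([[1/2, 1/3, 0],
              [1/2, 1/3, 0],
              [0,   1/3, 1]])
Q = np.array([[1, 1, 0],
              [0, 0, 0],
              [0, 0, 1]])
for name, M in [("P", P), ("Q", Q), ("PQ", P @ Q), ("QP", Q @ P)]:
    print(name, np.round(np.linalg.eigvals(M), 6))
# P's eigenvalue 1 is simple, but both PQ and QP pick it up with multiplicity 2.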
Your intuition may have been based on the idea that because the spectral radius (absolute value of the largest eigenvalue) of a stochastic matrix is bounded by 1, the matrix must be contractive. It is, but only in the 1-norm, not the 2-norm. Together with the fact that stochastic matrices aren’t necessarily normal (and thus don’t have to be diagonalizable), that ruins a lot of good intuitions. 🙁
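A tiny illustration of the 1-norm vs 2-norm point (my own toy example): the column-stochastic matrix below maps one probability vector to another, so the 1-norm is preserved, but the Euclidean length grows:

$\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \left\| (1/2, 1/2) \right\|_2 = \tfrac{1}{\sqrt{2}} < 1 = \left\| (1, 0) \right\|_2 .$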
I posted a more complete answer earlier, but it’s either held up in moderation, or vanished into the ether.
Dave, this is a great topic! Let’s approach this topic from an engineering point-of-view, namely, by asking “How can we break this machine?”
For computing devices, the usual answer is, “Let’s feed it an unexpected input.” So let’s see … we feed the device a strictly positive input state … the device implements a surjective (possibly nonlinear) map onto strictly positive output states.
Let’s take it for granted that Deutsch’s causal self-consistency condition respects strict positivity (but has this ever been proved?) … in which case we ask the natural question: “Is the computational map a continuous function of the inputs?” … and then (assuming the response is discontinuous, as seems most likely) we ask the further natural question: “What happens when we try to break closed-loop computations by supplying input states that are on the boundary of a discontinuity?”
The nice thing about quantum information science is how natural such questions are to ask, and how hard they are to answer! 🙂
Let’s take it for granted that Deutsch’s causal self-consistency condition respects strict positivity (but has this ever been proved?)
I don’t know if it has ever been proven, but in my algebraic tinkering with it, it certainly seems to preserve structure in more cases than one would have originally thought.
On behalf of all us hyphenated types, I’m filing a discrimination suit against ScienceBlogs. :>
Actually, since my subsequent comment went through without trouble, I think it was the sheer volume of HTML tags in my earlier one.
I absolutely agree, Ian!
It’s very easy to lose sight of strict positivity’s central importance when the quantum dynamics are POVMs applied sequentially, but (to paraphrase Gen. Turgidson in Dr. Strangelove) “Well, Mr. President, I would say that CCL physics has invalidated that policy!”
The CTC formalism of Deutsch preserves positivity. It’s easy to see that: the evolution is always unitary. The only strange thing is that the input is equal to the output, i.e. that the CTC qubits are in a special initial state.
For all you students, the above post said strict positivity when it should have said complete positivity. My bad!
See Nielsen and Chuang, Box 8.2, for a lucid discussion of complete positivity.
And Dave, I take your point … yes, complete positivity is assured. The question of whether the CCL computational output is a non-smooth function of the computational input remains opaque (to me). These are fun questions!
Oops. Very minor technical point re: my example above… Dave asked about QP, not PQ. QP also has eigenvalues {1,1,0}. In case you were wondering!
One final note on this extremely interesting topic … over on Shtetl Optimized, Scott Aaronson has posted the slides for his QIP talk on CCLs.
These notes give an example to show that yes, as discussed in the above posts, the CCL computational output is (in general) a non-smooth function of the computational input.
Scott’s notes go on to assert (the mathematical details are not given, but the assertion is highly plausible) that there are some PSPACE-complete unitaries U for which the computational output is a continuous function of the input.
My own take on this (from an engineering perspective) is simple:
My own interest in these phenomena is highly practical: these same questions arise naturally, not from CCLs, but from projecting the equations of linear quantum mechanics onto the curved state-spaces of Kählerian tensor network manifolds.
The point being not that Nature does this projection, but that ordinary human beings do it, within the context of (almost all) known methods for efficient simulation of quantum dynamics. So if there are inherent aspects of the resulting (simulated) quantum physics that are unexpected, we quantum systems engineers definitely want to know about it!
This is yet another respect in which even the most arcane aspects of QIT are turning out to have surprising (and important) practical applications.
The CTC formalism of Deutsch preserves positivity. It’s easy to see that: the evolution is always unitary. The only strange thing is that the input is equal to the output, i.e. that the CTC qubits are in a special initial state.
No, see, that’s my point. We think it is strange, but if you break that equation down algebraically piece by piece, we find this condition is nothing special. You really ought to take a look at the file I sent you. This is why I think Deutsch’s consistency condition is actually too weak. I can’t distinguish between a CTC qubit and a chronology-respecting qubit except in certain cases (namely if the CTC qubit acts as a control qubit in some sort of controlled operation).
Um, that last line should have read “It can’t distinguish…” not “I can’t distinguish…” although I suppose there’s an element of truth to both.
If I may be allowed to re-post more on John Sidles’s research…
http://scienceblogs.com/catdynamics/2008/07/dog_days_of_blogging.php#c1028694
John Sidles has just made a fascinating variant of the Anthropic principle: “the quantum state-space of Nature is itself likely to be dynamically Kählerian.” This is over on the “Quantum Computing Since Democritus Lecture 19: Time Travel” thread of Professor Scott Aaronson’s blog “Shtetl-Optimized.”
I’ve wondered thus:
In a simple sense, the speedup [by embedding quantum trajectories on a suitably-curved Kahlerian manifold, so that the state-space geometry itself implicitly handles much of the computational complexity of calculating the trajectory, and hence that the resulting codes are tremendously fast] is because of results such as the Lefschetz Theorem (that each double point assigned to an irreducible algebraic curve whose curve genus is nonnegative imposes exactly one condition) which is itself a consequence of the Kahler condition (equivalently:
existence of a Kahler metric; existence of a Kahler form, i.e. a real closed nondegenerate two-form, i.e., a symplectic form, satisfying a technical condition with the almost complex structure induced by multiplication by i; existence of a Hermitian metric where the real part is a Kahler metric, and the imaginary part is a Kahler form; or existence of a metric for which the almost complex structure is parallel).
Similarly, the Hodge decomposition of cohomology stems from the Kähler condition for compact manifolds. But exactly why, other than “neat math,” should we suspect that “the quantum state-space of Nature is itself likely to be dynamically Kählerian”?
I like the conjecture very much, but seek more confidence in it.
Posted by: Jonathan Vos Post | August 2, 2008 6:47 PM