Sometimes there are conflicts that run deep inside me because of my original training as a physicist. One thing I have never understood in classical computer science is why the locality and three-dimensionality of our world don’t come into play in the theory of computational complexity. I mean, sure, I can understand how you would like to divorce the study of the complexity of algorithms from the underlying medium. Sure, I can also understand that those who study architectures spend copious amounts of time dealing with exactly how resources scale due to the constraints of locality and dimensionality. But isn’t it true that a true theory of the complexity of information processing should, at its most fundamental level, make reference to the dimensionality and connectivity of the space in which the computer is built? Perhaps the complexity of cellular automata gets some way towards this goal, but somehow it doesn’t feel like it goes all of the way. Most importantly, the usual uniformity conditions on cellular automata seem to me overly restrictive for a theory of computational complexity that doesn’t ignore issues of dimensionality and locality.
Another interesting spinoff of this line of reasoning is to think about how computation changes when the topology of spacetime is nontrivial, or even when that topology is itself something which can change with time. What is the power of a computer which can change the very topology of its underlying circuit connectivity?
– in a related paper we discuss adaptive quantum networks based upon superposition of differing network topologies …
Actually, to a certain extent, this is taken into account. One thing that is often overlooked when citing computational complexity results is that the particular polynomial (and sometimes even the class) holds only with respect to a particular architecture.
The two most commonly cited architectures are, of course, the random-access machine (RAM), equivalent to modern, real computers, and the single-tape Turing machine.
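As a concrete instance of that architecture dependence, take the standard palindrome example: recognizing palindromes takes O(n) steps on a RAM, while Hennie’s crossing-sequence argument shows any single-tape Turing machine needs Ω(n²) steps. Here is a small Python sketch (illustration only) that counts the head moves of the obvious zig-zag strategy, just to make the quadratic growth visible:

```python
# Toy comparison of the same problem in two machine models.
# On a RAM, palindrome recognition is O(n); on a single-tape Turing machine
# the natural zig-zag strategy makes ~n^2/2 head moves, and Hennie's
# crossing-sequence argument shows Omega(n^2) is unavoidable.

def palindrome_ram(s: str) -> bool:
    """O(n): random access lets us compare s[i] with s[n-1-i] directly."""
    n = len(s)
    return all(s[i] == s[n - 1 - i] for i in range(n // 2))

def zigzag_head_moves(s: str) -> int:
    """Head moves used by the obvious one-tape strategy: repeatedly walk
    between the two unmatched ends, comparing and 'crossing off' symbols."""
    moves, left, right, pos = 0, 0, len(s) - 1, 0
    while left < right:
        moves += abs(right - pos)      # walk over to the right end
        pos = right
        if s[left] != s[right]:
            break                      # mismatch: reject
        moves += abs(pos - left)       # walk back to the left end
        pos = left
        left, right = left + 1, right - 1
    return moves

if __name__ == "__main__":
    for half in (1000, 2000, 4000):
        s = "ab" * half
        s = s + s[::-1]                # a palindrome of length 4 * half
        # doubling the length roughly quadruples the head-move count
        print(len(s), palindrome_ram(s), zigzag_head_moves(s))
```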
Any statement of computational complexity, any engineer knows, is valid only for a particular range of problem size in the real world. You run out of RAM, or your RAM grows so large that addressing is no longer unit time but log time instead, or you get to the point where your RAM cells are so far apart that speed-of-light/signal propagation considerations become important. Circuit engineers in particular deal with this all the time.
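To put rough numbers on that (a back-of-the-envelope scaling only, with d the spatial dimension of the layout, so d = 3 for real hardware):

```latex
% Cost of one "unit-time" access to a memory of N cells, once N gets large:
%  - addressing alone takes Omega(log N) bit operations, and
%  - packing the cells at fixed density in d spatial dimensions puts the
%    farthest cell a distance Theta(N^{1/d}) away, so propagation at speed c gives
\[
  t_{\mathrm{access}}(N) \;=\; \Omega(\log N)
  \qquad\mathrm{and}\qquad
  t_{\mathrm{access}}(N) \;=\; \Omega\!\left(N^{1/d}/c\right),
\]
% which is why the unit-cost RAM is a good model only over a bounded range of N.
```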
In quantum computing, some of this pops up earlier than in classical computing, since many technologies force us to deal in nearest-neighbor-only fashion. To be immodest for a moment, we and a few others are working on this; see quant-ph/0408006 (under review at PRA) and the links from my Aqua project page. We also have some results on 2D and 3D layouts that we haven’t written up yet; they’re pretty much what you’d expect.
P.S. This MathWorld page says two-dimensional Turing machines are called turmites, ants, or vants on a square grid, or bees, turtles, or worms on a hexagonal grid.
Googling for three-dimensional or multidimensional Turing machines turns up a few things; since they are polynomially equivalent to the single-tape machine, they aren’t of much interest to theoreticians, apparently.
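For what it’s worth, the polynomial equivalence is easy to see at a coarse level; the following is only the naive simulation, not the tightest known bound. A d-dimensional machine running for t steps visits at most t cells, and a one-dimensional machine can store each visited cell as an (address, symbol) pair and rescan that record at every simulated step:

```latex
% Naive simulation of a d-dimensional tape machine on a one-dimensional tape:
%  - at most t cells are ever visited in t steps;
%  - each is recorded as an (address, symbol) pair, with addresses O(log t) long;
%  - every simulated step rescans the whole record, costing O(t log t),
% so the total simulation time is
\[
  T_{\mathrm{1D}}(t) \;=\; O(t \cdot t \log t) \;=\; O\!\left(t^{2}\log t\right),
\]
% a polynomial overhead, which is why the dimensionality of the tape can change
% the polynomial but not classes like P.
```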
Hey Rod,
I didn’t mean to imply that people in architecture haven’t thought about these problems to a huge degree, but what I’m most interested in is why dimensionality hasn’t been essential in theoretical constructions. I guess I find it sort of miraculous that all of these computational systems are related to each other by polynomial overheads. Can anyone come up with some strange properties of space which allow one to break out of this paradigm of polynomial overheads between a machine built in such a space and a Turing machine?
Hi Rod
– there are, of course, a few examples of more radical spacetime topologies in the literature. 😉
Dave: this point — that the theory of computing ought to take into account the dimensionality of spacetime — was **PRECISELY** my motivation for thinking about quantum search of spatial regions. And indeed, there you really do see a difference between 2 and 3 dimensions, although it’s only a log factor. Another good question is whether anything you can do in O(n) time on a 2D Turing machine tape, you can also do in O(n) time on a 1D tape. (I conjecture the answer is no — but maybe this is known.) As for topology, see the recent paper “Topology Inside NC1” by Allender et al.
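For reference, here is the scaling in question, stated only roughly; the exact power of the logarithm in two dimensions depends on which variant of the spatial-search algorithm one uses, so the tilde below hides those polylogarithmic factors.

```latex
% Quantum search over n sites arranged as a d-dimensional grid, with the
% searcher restricted to nearest-neighbor moves (rough statement only):
\[
  T(n) \;=\; \Theta\!\left(\sqrt{n}\right) \quad (d \ge 3),
  \qquad
  T(n) \;=\; \widetilde{O}\!\left(\sqrt{n}\right) \quad (d = 2),
\]
% versus Theta(n) classically, and versus the trivial Omega(n^{1/d}) lower
% bound coming from the time a signal needs just to cross the grid.
```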
Hey Scott,
I guess what I’m wondering is how to make dimensionality a “fundamental” part of computational complexity. All complexity problems can be constrained to a particular allowed geometry for the system which is solving the problem, but is there some way to make this implicit from the very beginning?
Thanks for the reference on planar circuits.
I only just discovered this thread (when googling for the Allender topology paper). Adding to what Scott says, it is possible that there is some role for dimensionality in classical (or quantum) complexity theory, but that the role is small (i.e. the slowdown/speedup only changes things at the P-time level rather than at higher levels).