APS Topical Group

The American Physical Society has a new topical group: Quantum Information, Concepts, and Computation (GQI). Here is an article about the topical group. Some quotes:

“It is meant to be a broadly inclusive home for researchers whose professional lives may have kicked off in various traditional disciplines, but who nonetheless share an over-arching interest in the foundations and ‘surprising implications’ of quantum mechanics,” said Caltech’s Hideo Mabuchi, GQI’s acting chair.

and

Greenberger also feels the field needs an effective lobbying group to represent its interests to federal funding agencies, most notably the National Science Foundation. “Many young people are becoming interested in the field, but there are few opportunities for having their research funded,” he said.
Part of the problem is that quantum theory suffers from the perception that it is a field for “old men,” since the debate dates back to 1935 and the famous Einstein-Podolsky-Rosen paradox. (That paper is still the most downloaded publication from the APS journal archives, 70 years after it was written.) But Greenberger points out that it is, in fact, a vibrant, exciting field at the forefront of physics, using all the latest laboratory techniques, and spinning off the newly emerging fields of quantum cryptography and quantum computing.

A Postmortem Chewing Out

Another interesting letter in “Perfectly Reasonable Deviations From The Beaten Track: The Letters Of Richard P. Feynman” by T. Ferris (foreword), R.P. Feynman (of course!), and M. Feynman (editor) is the following:

Mr. Todd Pramberg
Stockholm, Sweden
Dear Sir:
The fact that I beat a drum has nothing to do with the fact that I do theoretical physics. Theoretical physics is a human endeavor, one of the higher developments of human beings-and this perpetual desire to prove that people who do it are human by showing that they do other things that a few other humans do (like playing bongo drums) is insulting to me.
I am human enough to tell you to go to hell.
Sincerely,
Richard P. Feynman

Why do I find this letter interesting? Well, when I was a senior at Caltech a movie about Feynman, “Infinity,” starring Matthew Broderick, was released (I’ve never seen the movie, but I’ve heard it’s a stinker.) CNN was doing a spot about the movie and Feynman’s legacy and they needed a token undergraduate to blab about Feynman, so the smartest physicist in my class, Sebastian Maurer, and I were interviewed for the spot. Sebastian attempted to get a quote on T.V. about the Feynman lectures on physics which, if you listened to it carefully, could actually be interpreted as a statement about Mao’s little red book (Feynman’s lectures on physics used to come as a series of three red books.) Here is what I said about Feynman:

Mention his name to physics students at Cal Tech[sic] today and watch their eyes light up: “One of the reasons it was easier to become a physicist was because he was so exciting and he wasn’t the typical, you know, nerd who doesn’t say anything,” said Cal Tech[sic] senior Dave Bacon

So you see, the above letter makes me realize that what I said was exactly the sort of thing which would have driven Feynman crazy. So I kind of feel like I’ve been chewed out from beyond the grave.

A Gibberish Theory

Last night I finished reading “Perfectly Reasonable Deviations From The Beaten Track: The Letters Of Richard P. Feynman” by T. Ferris (foreword), R.P. Feynman (of course!), and M. Feynman (editor). I wouldn’t recommend this collection of letters to everyone, but it is interesting for those who have read a lot of the other stuff about or by Feynman, as the letters help flesh out Feynman’s character. There really aren’t that many “anecdotal” letters in the book, which is, of course, what everyone has come to expect from a Feynman collection (to be known as one gigantic anecdote generating machine…what a legacy). However, the following letter, in which information was requested about the infamous Lars Onsager, is rather amusing:

On one occasion when we were standing together, a young man came up to explain his ideas on superconductivity to us both. I didn’t understand what the fellow was saying-so I thought it must be nonsense (a bad habit I have.) I was surprised to hear Onsager say, “Yes, that seems to be the solution to the problem.” Did he mean the puzzle of superconductivity was solved-and I didn’t even know what the young man said? I guess so. I have never been sure-I think the young man could have been Cooper. Could you check?

This reminds me of a picture of Feynman’s blackboard at the time of his death. On this blackboard is a list of things “TO LEARN.” Included on the list are “the Bethe Ansatz Prob.”, “Kondo”, “2-D Hall”, “accel Temp.”, and “Non-linear classical Hydro”. “Kondo” has been crossed out. This list is amazing, first in that it contains probably some of the most interesting problems in physics (“accel Temp.” refers to the Unruh effect, where a uniformly accelerating observer observes the vacuum as a thermal bath with a temperature proportional to the acceleration.) But even more amazing is that the great Richard Feynman, who legend has always portrayed as knowing everything there is to know about physics, still had “to learn” these famous problems. One wonders if Feynman’s “to learn” was a lot different from everyone else’s “to learn.”

Pigeons, Discrete Log, and Entropy

The pigeon hole principle states that if you put [tex]$m$[/tex] pigeons into [tex]$n<m$[/tex] holes, then there are at least two pigeons in one hole. Consider, for example, the following problem:
Given a list [tex]$a_1,\dots,a_n$[/tex] of residues mod [tex]$p$[/tex] with [tex]$n>\log_2 p$[/tex], find two distinct subsets [tex]$S_1,S_2 \subset \{1,2,\dots,n\}$[/tex] such that [tex]$\prod_{i \in S_1} a_i =\prod_{i \in S_2}a_i~{\rm mod}~p$[/tex].
Here we have said “find”, but how do we even know that two subsets which solve this problem exist? The pigeon hole principle! The number of possible subsets of [tex]$\{1,2,\dots,n\}$[/tex] is [tex]$2^n$[/tex] (each element either is or is not in the subset, so every subset corresponds to a binary string of length [tex]$n$[/tex].) But [tex]$2^n>p$[/tex], so we are putting more pigeons (subsets) into our holes (the possible products, which run from [tex]$0$[/tex] to [tex]$p-1$[/tex]) than there are holes. Thus by the pigeon hole principle there must be two distinct subsets (two pigeons) whose products are the same.
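To make the pigeon hole argument concrete, here is a minimal brute-force sketch in Python (the function name and the example numbers are mine, purely for illustration): it enumerates subsets until two of them collide on the same product mod [tex]$p$[/tex], which the argument above guarantees will happen whenever [tex]$2^n > p$[/tex].

# Illustrative brute-force sketch of the pigeonhole argument, not an efficient algorithm.
from itertools import combinations
from math import prod

def find_colliding_subsets(a, p):
    """Return two distinct index subsets with equal products mod p."""
    seen = {}  # product mod p -> the first subset that produced it
    n = len(a)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            value = prod(a[i] for i in subset) % p
            if value in seen:
                return seen[value], subset
            seen[value] = subset
    return None  # cannot happen when 2**n > p, by the pigeonhole principle

# Example: p = 7 and n = 4 > log2(7), so a collision is guaranteed.
a = [3, 5, 6, 2]
print(find_colliding_subsets(a, 7))  # e.g. () and (0, 1): 1 = 3*5 mod 7
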
In computational complexity the above problem belongs to a class of search problems called PPP, which, according to the complexity zoo, stands for “Polynomial Pigeonhole Principle.” These are problems where we are searching for a solution to an instance of a problem, the solution can be verified to be a solution in polynomial time, and the existence of a solution is guaranteed by a pigeon hole argument. The second of these properties should be familiar from the class NP, and the first is what you’re doing when you go beyond decision problems and actually want to find a solution.
PPP is interesting for various reasons, but one of the reasons I know of it is that it is at least as hard as the discrete log problem. In the discrete log problem one is given two elements [tex]$a,b \in {\mathbb Z}_p$[/tex] and asked to find the smallest [tex]$s$[/tex] such that [tex]$a^s=b~{\rm mod}~p$[/tex]. One of the neat things about a quantum computer is that it can efficiently solve the discrete log problem, whereas there is no known efficient classical algorithm.
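For comparison with the quantum case, here is the obvious classical brute-force approach, sketched in Python (again just an illustration, with names I made up): it simply steps through the powers of [tex]$a$[/tex], which takes time roughly [tex]$p$[/tex] in the worst case, whereas Shor's algorithm runs in time polynomial in [tex]$\log p$[/tex].

# Illustrative brute-force discrete log; exponentially slower than Shor's algorithm.
def discrete_log(a, b, p):
    """Smallest s >= 0 with a**s == b (mod p), or None if no such s exists."""
    value = 1 % p
    for s in range(p):
        if value == b % p:
            return s
        value = (value * a) % p
    return None

print(discrete_log(3, 13, 17))  # prints 4, since 3**4 = 81 = 13 mod 17
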
So what do we know about the power of quantum computers to solve problems in PPP? Well, we know a little. We know that a naive approach to solving these problems will fail. Suppose that we try to solve the above PPP problem by querying the function [tex]$f_{a_1,\dots,a_n}(b_1,\dots,b_n)=a_1^{b_1} a_2^{b_2} \cdots a_n^{b_n}~{\rm mod}~p$[/tex], where [tex]$b_i \in {\mathbb Z}_2$[/tex]. Then a result by Scott Aaronson says that a quantum algorithm requires [tex]$\Omega((2^n)^{1/5})$[/tex] queries of this function to solve this problem (later improved to [tex]$\Omega((2^n)^{1/3})$[/tex] by Yaoyun Shi). So this naive approach to attacking problems in PPP fails. An interesting open question remains, however, whether there is a more sophisticated approach to efficiently solving problems in PPP using a quantum computer.
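For what it's worth, here is that oracle written out (a sketch with my own naming, reusing the toy numbers from above): a collision [tex]$f(b)=f(b')$[/tex] of this function is exactly a pair of distinct subsets with equal products mod [tex]$p$[/tex], which is why a fast collision-finder for [tex]$f$[/tex] would solve the PPP problem.

# Illustrative sketch of the query function f_{a_1,...,a_n}(b_1,...,b_n) mod p.
def oracle(a, p):
    def f(bits):  # bits is a tuple of 0s and 1s, one entry per a_i
        value = 1
        for a_i, b_i in zip(a, bits):
            if b_i:
                value = (value * a_i) % p
        return value
    return f

f = oracle([3, 5, 6, 2], 7)
print(f((1, 1, 0, 0)), f((0, 0, 0, 0)))  # both print 1: the collision found above
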
Interestingly, the problem Scott and Yaoyun work on is also relevant to a physicist in the lab. The problem they consider is called the collision problem. Suppose you have a function [tex]$f$[/tex] from [tex]${\mathbb Z}_N$[/tex] to [tex]${\mathbb Z}_N$[/tex] which is promised to be either 1-to-1 or 2-to-1, and the problem is to distinguish between these two cases. Scott and Yaoyun’s result says that you need to query this function [tex]$\Omega(N^{1/3})$[/tex] times to distinguish between these two cases (and, it turns out, there is a quantum algorithm which uses this number of queries). Now how does this have relevance to a physicist? Suppose we follow the standard query method and query the function to produce the state [tex]$N^{-1/2} \sum_{x \in {\mathbb Z}_N} |x\rangle|f(x)\rangle$[/tex]. Discarding the second register produces a mixed state of rank [tex]$N$[/tex] if [tex]$f$[/tex] is 1-to-1 but of rank [tex]$N/2$[/tex] if [tex]$f$[/tex] is 2-to-1. The entropy of these states is thus either [tex]$\log_2 N$[/tex] or [tex]$\log_2 N-1$[/tex], respectively. Now suppose you are a physicist in the lab and you want to measure the entropy of a quantum state which you can prepare multiple copies of. You might be interested in how many copies you will need to measure this entropy. Scott and Yaoyun’s result then implies that you will need to perform at least [tex]$\Omega(d^{1/3})$[/tex] experiments to measure the entropy, where [tex]$d$[/tex] is the dimension of your quantum states. This is bad news if you really want to measure the entropy of a many-qubit system! In a related manner, one can think of the period finding function of Shor’s algorithm as a method for determining the entropy of a special class of states which have a particular promise (that they are periodic).
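Here is a small numpy sketch of that entropy argument (my own toy example, not anything from Scott or Yaoyun's papers): build the reduced state of the first register for a 1-to-1 and a 2-to-1 function and check that the entropies come out as [tex]$\log_2 N$[/tex] and [tex]$\log_2 N - 1$[/tex].

# Illustrative numpy sketch: entropy of the first register after discarding |f(x)>.
import numpy as np

def reduced_state(f, N):
    """Reduced density matrix on the first register of N^{-1/2} sum_x |x>|f(x)>."""
    rho = np.zeros((N, N))
    for y in set(f):
        v = np.array([1.0 if f[x] == y else 0.0 for x in range(N)])
        rho += np.outer(v, v)
    return rho / N

def entropy(rho):
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]
    return float(-np.sum(eigs * np.log2(eigs)))

N = 8
one_to_one = list(range(N))                   # f(x) = x
two_to_one = [x // 2 for x in range(N)]       # f(x) = floor(x/2)
print(entropy(reduced_state(one_to_one, N)))  # 3.0 = log2(8)
print(entropy(reduced_state(two_to_one, N)))  # 2.0 = log2(8) - 1
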
Discrete log is a pigeon problem, naive quantum algorithms for pigeon problems fail, and pigeon problems put limits on how efficiently we can measure the entropy of a system. It’s topics like these, which spread from computer science all the way to experimental physics, that make me happy to be working in quantum information science.

Digital, Analog, Deterministic, Probabilistic

After reading Robert Laughlin’s book and thinking about his remarks on quantum computing, I’ve decided I can pinpoint exactly the error which he is making in his disregard for the possibility of building a quantum computer. The mistake, I believe, is basically a category error.
So let’s talk about computers. Computers are inherently physical devices, so all discussion of what is and what is not a computer boils down, at some level, to physics (I’m not taking a reductionist viewpoint here: I include the study of emergent properties of physical systems in what I call physics.) Now there are two separations which I want to make when I talk about what types of computers we encounter out in the wilds of the laboratory: the first is a description of the state space and the other is a description of the dynamics.
The state space of a classical computer comes in two forms: digital and analog. For digital computers the state space is a discrete, finite space. Such state spaces don’t exist in nature (as far as we know) but are great approximations for a great variety of physical devices. Thus the hard drive on your computer can store the value of a bit in the orientation of a ton of magnetic spins, but if we try to store a bit into a single spin we have problems. Every discrete finite space I know of is discrete only after some coarse graining. But surely, you say, quantum mechanics gives discrete state spaces, right? The energy levels from atomic orbitals are discrete. But this is only true if we have an isolated atomic system which does not interact with the rest of the universe. Introduce coupling to an electromagnetic field (acting either as the environment or as a measuring device) and these discrete energy levels “smear out,” meaning in effect that we can only think about them as discrete after we have coarse grained over this smearing. Whether there are truly discrete state spaces in physics (say in the wavefunction of the universe) is, I think, something for which we have little concrete evidence.
Back to computers. On the opposite end of the spectrum from a digital computer, it has been useful to introduce the notion of an analog computer. Here the state space of the computer is taken to be some continuous parameter. Thus, for example, the position of the planet Mercury could be considered a state space variable for our “solar system” computer. If one takes seriously the idea of an analog computer one runs into all sorts of problems. This is because it is easy to couple analog state spaces so that the variables of these systems are manipulated in a way that performs elementary arithmetic on them. And if you take seriously the continuous nature of the state space, this allows you to do amazing things like solve NP-complete (and harder!) problems efficiently on these computers. Of course there is a fatal reason why no one has built such a computer: if we introduce any noise into these variables or any imprecision into our measurements, this power disappears. (There is still interest in analog computers, then, not for the computational power arising from the continuous nature of their state spaces, but because the transformations one can perform on them are sometimes difficult to perform on a digital computer. There doesn’t seem to be any superpolynomial speedup in any such devices, but one might get more efficient algorithms because the dynamics happens to do something which is not natural for your digital computer.)
OK, so state spaces are digital or analog. And analog state spaces do not lead to robust computing devices. Now we move on to the second category describing classical computers, the dynamics. The first type of dynamics I want to draw out is deterministic evolution. In this case the time evolution of the state space is perfectly fixed by the initial conditions of the state space. So we set up our planets in a particular configuration and Newton’s laws (or general relativity) take over and push forward the analog computation. Similarly two transistors work to enact a NAND gate in which the digital data is transformed in a fully deterministic manner. As with the idea of digital data, deterministic evolution is often a consequence of some emergent property of the system (i.e. every once in a while your computer does have a hardware error…but you’ll have to wait around quite a long time for this to happen with modern computers!)
In the distinction between digital and analog, we saw that physics essentially gives us analog state spaces, and that for certain physical systems, when we apply the appropriate coarse graining, we end up with a digital device. What, then, is the dynamics from which deterministic evolution arises for a computer? For classical computers, this dynamics is basically probabilistic evolution. Instead of being able to completely predict the outcome of the evolution of the state space, now we can only predict the probability of the evolution. Sometimes this evolution is so unpredictable (so noisy) that there is no way for such a device to function as anything resembling a deterministic computer. But sometimes this evolution is predictable enough to allow us to construct a robust, effectively deterministic computer from the evolution (and in the case of our real world computers this is effectively done for us by the physics of the semiconductor transistors.) It even seems that using the inherent probabilistic nature of the evolution can increase the computational power of the device (although this is certainly one of the outstanding problems in computer science: whether randomness really gives one any computational advantage (the BPP=P question).) So sometimes we want the probabilistic nature of our computer to yield deterministic evolution, but often we want it to evolve probabilistically, in a manner such that we control the probabilities with a high degree of precision.
Notice further that the type of dynamics for a classical computer affects how we describe the state space of the computer. If the evolution is deterministic, we can just keep track of the variables, but once we introduce probabilistic evolution, we need to talk about probability distributions (over a discrete state space in the digital case and over a continuous state space in the analog case.)
So what is the category error which makes Laughlin sceptical of quantum computing? It appears that he is failing to distinguish between dynamics that is not deterministic and analog state spaces. The argument is basically that the amplitudes which appear in quantum theory are an analog state space and thus the computational power of quantum computers is analogous to that of analog classical computers. But this is to confuse the dynamics of a quantum computer (unitary evolution of probability amplitudes) with the state space itself. Among the first discoveries about quantum computers (which I basically credit to Bernstein and Vazirani) was that the amplitudes of quantum states behave, for most purposes, like probabilities and not like an analog state space. Indeed the state space of a quantum system can be either discrete (think energy levels) or continuous (think position of a particle). But since the dynamics is that funny form of evolution called unitary evolution, the description of either of these systems is in terms of probability amplitudes. Confusing the probability amplitudes with the state space is directly analogous to confusing the probability distribution of a classical system with the state space of the system.
Now just as a probabilistically evolving digital classical computer can be made to act deterministically, we can talk about getting a digital quantum computer (digital means the state space is finite and discrete) to act deterministically. Or better yet, just as we sometimes desired probabilistic evolution with a given accuracy, we can get a digital quantum computer to behave as a unitarily evolving system with a given accuracy. This, of course, is what the theory of fault-tolerant quantum computation is designed to do. Of course the world didn’t have to give us the ability to perform fault-tolerant quantum computation, but it appears that it has.
State spaces are different from the dynamics, and hence from the way we describe the system. In fact, it’s one of the reasons I cringe when I hear the word “analog” in discussions not just of quantum computers, but also in the biological sciences. The probabilistic nature of biological information processing has got nothing to do with the analog or digital nature of the underlying information (so while concentrations are analog, they should for all intents and purposes be thought of as a finite-size frequentist version of probabilities.)
OK, I’ve clearly written too much on this subject. I’ll leave you be now. But just to finish things off, here is an interesting argument to give to those who believe that quantum computers are analog devices. As I mentioned above, it is known how to use analog computers to solve NP-complete problems efficiently. But it is widely suspected that quantum computers will not be able to solve NP-complete problems efficiently (which in turn is perhaps a reason to think long and hard about whether this is really true: I waver on about a three year period as to whether I should spend more time attempting to solve NP-complete problems efficiently on a quantum computer.) The question to ask a sceptic, then, is: if quantum computers are analog computers, why is it that they cannot solve NP-complete (or harder) problems? Surely this indicates that there is no direct analogy between analog and quantum computers, or one of the thousands of people working in the field would surely have discovered such a connection! Well, the argument might not work, but I’d be curious to hear the variety of sceptics’ retorts.

The History of My Brain

The brain is a remarkable computational device (not material? bah. get your faith out of my office.) A rough estimate of the computational power needed to simulate the brain’s neural power is around a billion MIPS (millions of instructions per second). When I was growing up, I was fascinated by the possibilities of artificial intelligence. In fact, this forms the basis of my three ways to do theoretical physics:

  1. Do it yourself. This requires lots of work and the ego to believe you can actually figure things out. The benefit is that you can perhaps contribute now. Unfortunately, I don’t believe that this is a long term solution to doing theoretical physics (see 2 and 3).
  2. Build a computer to do it for you. Or build a computer to speed up certain functions in your brain. How long before we can build computers with vastly greater ability than our mushy brains? Are our brains really operating at some optimal limit for computation? For number crunching tasks, most definitely not. This solution seems to me the much more realistic long term way to contribute to theoretical physics. Of course, you may not get the credit for any of this. The computer that did all the work will surely get the lion’s share of the credit. Can a computer win the Nobel prize?
  3. Join the Search for Extraterrestrial Intelligence (SETI). The theory here is that if we discover intelligent aliens they will more than likely be more advanced than we are. Thus they will have used 1 and 2 to have learned vastly more about theoretical physics than we currently know. So if you join SETI and SETI succeeds, you will just be able to ask the aliens about advanced theoretical physics. Of course, this assumes that the aliens are nice and not bug-like (because as every fan of sci-fi knows, bug-like aliens are all about killing fleshy humans.)

But back to the topic at hand: simulating the human brain. One problem with an analysis that just looks at the raw computational speed of the human brain is that it disregards the computational complexity inherent in the structure of the brain itself. In particular, the brain is the product of 500 million plus years of evolution (I want to start counting from at least some ancestor which had some sort of nervous system). So the question I’ve always found fascinating is how large an overhead those many years of evolution impose in producing the structure of the brain. Do we need to simulate all 500 million plus years of evolution, requiring computational power far beyond even our best projections of the computational power of the brain? This is particularly troublesome when you think about the power of evolution: our ancestors have been roaming around the landscape of possible brains for a long, long time. A counter to this is the idea that we don’t need to simulate all of the evolution, but only the developmental biology which leads to a full grown human. This latter point of view seems less computationally challenging, but still daunting. And I’ll bet that this is the computational hangup for simulating human intelligence: not the raw power of the brain, but the power to reproduce the development of the brain. I wonder if anyone has done any rough calculations on the computational power needed for this latter task.
As I said, when I was growing up, I was fascinated by the possibility of advanced computer intelligences. I can still remember the frustration of not being able to produce intelligence on my TRS-80 Color Computer II. So I left those dreams behind me, but I promised myself that if I ever thought computers were powerful enough to really start producing intelligence I would return to the field. I guess you shouldn’t be surprised, then, if in thirty years you find me hacking away in some dark basement beside a machine named “Newton.”

Unconventional Convention

Via the Preposterous Universe (must resist bad words about the University of Chicago for denying Sean Carroll tenure) comes the announcement of The Time Traveler Convention to be held at MIT. If anyone sees me there, could they please send me an email and tell me about it? Because I’m in Seattle, and if I do show up at MIT, then this clearly means that I invented a time machine. And if this is true, I’ve got a lot of thinking to do about flux capacitors.

Bound Secrecy

One of the most fascinating early endeavours in quantum information theory was the discovery that pure bipartite quantum entanglement is a quantifiable resource. Thus, for instance, one can take many copies of the standard currency for bipartite pure entanglement, the singlet state, and create many copies of any other non-maximally entangled bipartite pure state. Similarly one can take many copies of a non-maximally entangled bipartite pure state and distill out copies of a singlet. The rates of these conversion processes are quantified by the von Neumann entropy of one half of the non-maximally entangled state.
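To make the exchange rate concrete, here is a minimal numpy sketch (my own example, with names I made up): compute the entropy of one half of a non-maximally entangled two-qubit state, which gives the number of singlets per copy obtainable from (or required to create) the state in the asymptotic conversions described above.

# Illustrative sketch: entanglement entropy of a bipartite pure state via its Schmidt coefficients.
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Entropy of Alice's reduced state for the pure state psi on dim_a x dim_b."""
    schmidt = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    probs = schmidt**2
    probs = probs[probs > 1e-12]
    return float(-np.sum(probs * np.log2(probs)))

t = np.pi / 6
psi = np.array([np.cos(t), 0.0, 0.0, np.sin(t)])    # cos(t)|00> + sin(t)|11>
print(entanglement_entropy(psi, 2, 2))               # about 0.81 singlets per copy

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
print(entanglement_entropy(singlet, 2, 2))           # 1.0, the maximum for two qubits
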
The situation, however, for mixed states is different. Here we find that there are states from which no pure entanglement can be distilled, but which are in fact entangled. These states are called bound entangled states because they require some pure entanglement in order to create them (thus the entanglement is bound into the mixed state.) Bound entangled states are strange beasts, being entangled and yet unable to be converted back into any sort of pure state entanglement currency.
One interesting use of a standard currency for bipartite pure state entanglement is in key generation. In these protocols, Alice and Bob distill noisy quantum states to obtain pure entangled states of some standard currency which can then be used to share a private key. Since an essential part of these protocols has been to distill pure entangled states, it seems, when you first think about it, that bound entangled states would not be useful for private key generation. However, this turns out not to be correct! Karol Horodecki, Michal Horodecki, Pawel Horodecki and Jonathan Oppenheim have shown (PRL 94, 160502 (2005), quant-ph/0309110) that one can obtain a secure key from bound entangled states! So, while bound entangled states often don’t behave like your standard non-bound entangled states, it turns out that for secret key generation they are useful.

An Invitation to Crazyland

Next week I’m going to give a different kind of a seminar at the Santa Fe Institute. Here is the announcement:

David Krakauer & Dave Bacon are delighted to host the first Speaker in
our: “Possible Paths” seminar Series.
In this series we ask speakers to imagine the state of a scientific
field 1000 years into the future and to trace a possible path towards
that future, enumerating potential prize winning discoveries along the way.
We are positively ecstatic that our first speaker will be none other
than: DR DAVE BACON who will give the Thursday Colloq entitled:
“The Reconciliation of Quantum Theory and General Relativity: No Strings
Attached”
Please view the attached pdf file which gives a graphical insight into
the ambitious nature of Dr Bacon’s program & some of the fine minds he
will need to transcend along his path.
The first to correctly identify all members of the Baconian Solar System
wins Applause & Admiration at the Event.

Here is the attached photo (note: I didn’t make this photo, David Krakauer did and he thinks it’s very funny.)
A New Kind of Solar System
Basically the talk will be a crazy conglomeration of different observations I’ve made which lead me to believe the path towards reconciling quantum theory with gravity will look nothing like what we currently envision it to be. I’ll post a copy of the PowerPoint once I finish the talk so that everyone can get some good hearty chuckles.

The Cost of Geodesics

Michael Nielsen has a nice post about the geodesic equation when there is a cosmological constant. This reminded me of an interesting property of geodesics in general relativity.
What Michael showed was how you can obtain the equation of geodesic motion in a few quick and easy steps, once you assume a form for the stress-energy tensor. A simple question to ask is whether you can obtain the geodesic equation without such an assumption. And this is exactly what Einstein, Infeld, and Hoffmann showed in 1938 (Annals of Mathematics, 39, p. 62). What these authors showed was that if you introduce singularities in the field equations, these singularities indeed follow geodesics. In these models, the geodesic equation comes for “free” without any assumption about the stress-energy tensor.
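For concreteness, here is a minimal sketch of that first kind of derivation (my paraphrase of the standard pressureless dust argument, so see Michael’s post for the careful version). Take [tex]$T^{\mu\nu}=\rho u^\mu u^\nu$[/tex]. Then [tex]$\nabla_\mu T^{\mu\nu}=0$[/tex] expands to [tex]$\nabla_\mu(\rho u^\mu)\, u^\nu + \rho u^\mu \nabla_\mu u^\nu = 0$[/tex]; contracting with [tex]$u_\nu$[/tex] (and using [tex]$u_\nu u^\nu=-1$[/tex] and [tex]$u_\nu \nabla_\mu u^\nu=0$[/tex]) gives [tex]$\nabla_\mu(\rho u^\mu)=0$[/tex], and what remains is [tex]$u^\mu \nabla_\mu u^\nu=0$[/tex] wherever [tex]$\rho \neq 0$[/tex], i.e. the geodesic equation [tex]$\frac{d^2 x^\nu}{d\tau^2} + \Gamma^\nu_{\alpha\beta}\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau}=0$[/tex].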
It’s interesting to consider the analogous situation for Maxwell’s equations and for the equations of motion in a non-viscous fluid. Maxwell’s equations in vacuum, when we add singularities to the fields, do not properly lead to the equations of motion for charged particles. On the other hand, the equations of motion for vortices, i.e. singularities, in a non-viscous fluid are determined by the fluid’s equations of motion alone. It seems that the essential reason why this works is that the former equations are linear, while the latter equations are non-linear.
Oh, and another interesting fact about the Einstein, Infeld, Hoffmann derivation is that it doesn’t give a sign for the mass of the singularity: it gives no reason why gravitation is always attractive!