A Supporter

It’s always good to see the “quantum computing friendly” gain entry to high places:

Tony Hey to join Microsoft as Corporate Vice President
Professor Tony Hey is to join Microsoft Corporation as a corporate vice president to co-ordinate their Technical Computing Initiative. He will work across the company to co-ordinate Microsoft’s efforts to collaborate with the scientific community worldwide.
Currently Director of the UK’s e-Science Programme, Tony Hey has been a member of staff of the School of Electronics and Computer Science (ECS) since 1986, and was Head of School from 1994 to 1999.
He played a critical role in building the School into one of the UK’s leading academic departments, and has retained his Chair in the School throughout his period of secondment to UK e-Science.
‘This is wonderful news,’ said Professor Wendy Hall, Head of ECS, ‘and I am delighted to send our warmest congratulations to Tony on behalf of the School. His energy, vision, and sheer ability to make things happen will be of huge benefit to Microsoft and to future collaboration with researchers worldwide. At Southampton we are very glad that Tony will be retaining his Chair in the School of Electronics and Computer Science, and his strong links with the School and the University.’
Professor Hey is a Fellow of the Royal Academy of Engineering, the British Computer Society, the Institution of Electrical Engineers and a member of the Institution of Electrical and Electronic Engineers. In the New Year Honours List (2005) he received the CBE (Commander of the British Empire) for his services to science.
‘Today computation plays a critical role in advancing research across almost every field of science, yet far too often scientists must build all their own programming infrastructures and research their own algorithms to help advance their research effort,’ said Professor Hey. ‘By collaborating with the scientific community, I am confident that some of Microsoft’s advanced technologies can be used to accelerate their rate of discovery.’
He has worked in the field of parallel and distributed computing since the early 1980s and was instrumental in the development of the MPI message-passing standard and in the Genesis Distributed Memory Parallel Benchmark suite. In 1991, he founded the Southampton Parallel Applications Centre (now the IT Innovation Centre), which has played a leading technology transfer role in Europe and the UK in collaborative industrial projects. His personal research interests are concerned with performance engineering for Grid applications but he also retains an interest in experimental explorations of quantum computing and quantum information theory.

For those who don’t remember, Tony Hey was the editor for “Feynman Lectures on Computation” and the companion volume entitled “Feynman and Computation.”

Quantum Documentary?

One question I’ve had for a long time is why there hasn’t been a good documentary produced about the field of quantum information science. Quantum computing has certainly leaked into the geek mainstream through various sci-fi stories, articles in Scientific American, and postings on Slashdot, and yet there hasn’t been a NOVA-style documentary on the field, in spite of the (I think) fascinating results which have emerged from it over the last few decades (we’ve got to count quantum cryptography!).

Summer Teaching

This summer I’m teaching a course in the professional masters program here at UW:

CSE P 590 TU: Quantum Computing
Dave Bacon – Instructor
Day/Time: Wednesday 6:30-9:20 pm; Place: TBD
(First time offering.) An introduction to and survey of the field of quantum computing. Quantum computation is an emerging field whose goal is to design effectively atomic-sized computers which exploit the parallelism of the quantum mechanical laws of the universe. While this sounds futuristic, quantum computers are fast becoming a reality, and have the potential to revolutionize computation over the next twenty years. Topics include quantum algorithms, quantum error correction, and quantum cryptography. This course will cover why quantum computers can break certain public key cryptosystems, the engineering challenges in building a physical quantum computing device, and the level of security assured by quantum cryptographic devices. Prior knowledge of quantum theory is not necessary.

I just put up a preliminary version of the course website. The one issue which is bugging me a lot as I try to think about this class is exactly how to deal with the nearly three-hour length of these classes. I may be a funny-looking guy, but am I really as entertaining as the extended versions of the Lord of the Rings?

Quantum CMOS?

If you go back and look at the early days of the semiconductor revolution, you see that there were many ideas for how to build gate sets out of n-type and p-type transistors. One set of ideas was known as NMOS technology and another was known as CMOS technology. In almost every modern computer chip, CMOS technology currently reigns supreme. One of the main reasons for this has to do with the power dissipation properties of these technologies: CMOS circuits dissipate a lot less energy than NMOS circuits. Roughly speaking, CMOS circuits dissipate energy only when the circuit voltages are being switched, whereas NMOS circuits are often running in a situation where they are dissipating energy even in between the switching of the voltages. As you can guess, this makes quite a bit of difference in the power consumption of these devices.
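To make the contrast concrete, here’s a back-of-the-envelope sketch in Python of the two dissipation mechanisms. All of the numbers are made-up but plausible placeholders, not parameters of any real process; the point is just that switching energy scales like CV²f, while a pull-up that is always conducting burns power continuously.

```python
# Rough comparison of switching (CMOS-like) vs. static (NMOS-like) dissipation.
# Every number below is an illustrative assumption, not a real process parameter.

C_load = 10e-15      # load capacitance per gate, farads (assumed)
V_dd = 1.2           # supply voltage, volts (assumed)
f_clock = 1e9        # clock frequency, hertz (assumed)
activity = 0.1       # fraction of cycles on which a gate actually switches (assumed)
I_pullup = 50e-6     # current through an always-on NMOS-style pull-up, amps (assumed)
duty_low = 0.5       # fraction of time the output sits low, so the pull-up conducts (assumed)

# CMOS: energy ~ C * V^2 is dissipated only on switching events.
p_dynamic = activity * C_load * V_dd**2 * f_clock

# NMOS-style logic: a resistive/depletion pull-up conducts whenever the output is low,
# so power flows even when nothing is switching.
p_static = duty_low * V_dd * I_pullup

print(f"dynamic (CMOS-like) power per gate: {p_dynamic:.2e} W")
print(f"static (NMOS-like) power per gate:  {p_static:.2e} W")
```

With these placeholder numbers the static dissipation is more than an order of magnitude larger, which is the qualitative story above.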
So what is the equivalent technological detail for a quantum computer? Here the analogue of the transistors used to build a circuit is the fault-tolerant quantum gate. So if we look at fault-tolerant gate constructions and consider their energetics, are there technologies that minimize energy dissipation?

Digital, Analog, Deterministic, Probabilistic

After reading Robert Laughlin’s book and thinking about his remarks on quantum computing, I’ve decided I can pinpoint exactly the error which he is making in his disregard for the possibility of building a quantum computer. The mistake, I believe, is basically a category error.
So let’s talk about computers. Computers are inherently physical devices, so all discussion of what is and what is not a computer boils down, at some level, to physics (I’m not taking a reductionist viewpoint here: I include the study of emergent properties of physical systems in what I call physics.) Now there are two separations which I want to make when I talk about what types of computers we encounter out in the wilds of the laboratory: the first is a description of the state space and the second is a description of the dynamics.
The state space of a classical computer comes in two forms: digital and analog. For digital computers the state space is a discrete, finite space. Such state spaces don’t exist in nature (as far as we know) but are excellent approximations for a great many physical devices. Thus the hard drive on your computer can store the value of a bit in the orientation of a ton of magnetic spins, but if we try to store a bit into a single spin we have problems. Every discrete finite space I know of is discrete only after some coarse graining. But surely, you say, quantum mechanics gives discrete state spaces, right? The energy levels from atomic orbitals are discrete. But this is only true if we have an isolated atomic system which does not interact with the rest of the universe. Introduce coupling to an electromagnetic field (acting either as the environment or as a measuring device) and these discrete energy levels “smear out,” meaning in effect that we can only think about them as discrete after we have coarse grained over this smearing. Whether there are truly discrete state spaces in physics (say in the wavefunction of the universe) is, I think, something for which we have little concrete evidence.
Back to computers. On the opposite end of the spectrum from a digital computer, it has been useful to introduce the notion of an analog computer. Here the state space of the computer is taken to be some continuous parameter. Thus, for example, the position of the planet Mercury could be considered a state space variable for our “solar system” computer. If one takes seriously the idea of an analog computer one runs into all sorts of problems. This is because it is easy to couple analog state spaces so that the variables of these systems are manipulated in a way that performs elementary arithmetic on them. And if you take seriously the continuous nature of the state space, this allows you to do amazing things like solve NP-complete (and harder!) problems efficiently on these computers. Of course there is a fatal reason why no one has built such a computer: if we introduce any noise into these variables, or any imprecision into our measurements, this power disappears. (There is still interest in analog computers not, then, for the computational power arising from the continuous nature of their state spaces, but because the transformations one can perform on them are sometimes difficult to perform on a digital computer. There doesn’t seem to be any superpolynomial speedup in any such devices, but one might get more efficient algorithms because the dynamics happens to do something which is not natural for your digital computer.)
OK, so state spaces are digital or analog. And analog state spaces do not lead to robust computing devices. Now we move on to the second category describing classical computers, the dynamics. The first type of dynamics I want to draw out is deterministic evolution. In this case the time evolution of the state space is perfectly fixed by the initial conditions of the state space. So we set up our planets in a particular configuration and Newton’s laws (or general relativity) takes over and pushes forward the analog computation. Similarly two transistors work to enact a NAND gate where the digital data is transformed in a fully deterministic manner. As with the idea of digital data, deterministic evolution is often a consequence of some emergent property of the system (i.e. every once in a while your computer does have a hardware error…but you’ll have to wait around quite a long time for this to happen with modern computers!)
In the distinction between digital and analog, we saw that physics essentially gives us analog state spaces which, for certain physical systems and after the appropriate coarse graining, end up behaving as digital devices. What, then, is the dynamics from which deterministic evolution arises for a computer? For classical computers, this dynamics is basically probabilistic evolution. Instead of being able to completely predict the outcome of the evolution of the state space, we can only predict the probability of the evolution. Sometimes this evolution is so unpredictable (so noisy) that there is no way for such a device to function as anything resembling a deterministic computer. But sometimes this evolution is predictable enough to allow us to construct a robust, effectively deterministic computer from the evolution (and in the case of our real world computers this is effectively done for us by the physics of the semiconductor transistors.) It even seems that using the inherent probabilistic nature of the evolution can increase the computational power of the device (although this is certainly one of the outstanding problems in computer science: whether randomness really gives one any computational advantage (the BPP=P question).) So sometimes we not only want the probabilistic nature of our computer to yield deterministic evolution, but often we want it to evolve probabilistically, in a manner such that we control the probabilities with a high degree of precision.
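As a toy illustration of that last point (getting effectively deterministic behavior out of probabilistic components), here is a little Python sketch of von Neumann-style majority voting over noisy copies of a bit. The 1% flip rate and the number of copies are arbitrary choices of mine, not tied to any real hardware.

```python
import random

def noisy_copy(bit, p_flip=0.01):
    """A 'component' that transmits a bit but flips it with probability p_flip."""
    return bit ^ (random.random() < p_flip)

def majority_copy(bit, n_copies=15, p_flip=0.01):
    """Send the bit through n_copies independent noisy components and majority-vote."""
    votes = sum(noisy_copy(bit, p_flip) for _ in range(n_copies))
    return int(votes > n_copies // 2)

trials = 100_000
raw_errors = sum(noisy_copy(1) != 1 for _ in range(trials))
voted_errors = sum(majority_copy(1) != 1 for _ in range(trials))
print(f"raw error rate:   {raw_errors / trials:.4f}")
print(f"voted error rate: {voted_errors / trials:.6f}")
```

The redundant, voted version behaves (for all practical purposes) deterministically even though every individual component is probabilistic, which is the sense in which our digital computers sit on top of probabilistic physics.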
Notice further that the type of dynamics of a classical computer affects how we describe its state space. If the evolution is deterministic, we can just keep track of the variables, but once we introduce probabilistic evolution, we need to talk about probability distributions (over a discrete state space in the digital case and over a continuous state space in the analog case.)
So what is the category error which makes Laughlin sceptical of quantum computing? It appears that he is failing to distinguish between dynamics that is not deterministic and analog state spaces. The argument is basically that the amplitudes which appear in quantum theory are an analog state space and thus the computational power of quantum computers is analogous to that of analog classical computers. But this is to confuse the dynamics of a quantum computer (unitary evolution of probability amplitudes) with the state space itself. Among the first discoveries about quantum computers (which I basically credit to Bernstein and Vazirani) was that the amplitudes of quantum states behave, for most purposes, like probabilities and not like an analog state space. Indeed the state space of a quantum system can be either discrete (think energy levels) or continuous (think position of a particle). But since the dynamics is that funny form of evolution called unitary evolution, the description of either of these systems is in terms of probability amplitudes. Confusing the probability amplitudes with the state space is directly analogous to confusing the probability distribution of a classical system with the state space of the system.
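To spell the analogy out in code: below is a small numpy sketch of a two-state system described first by a probability vector evolved by a stochastic matrix, and then by an amplitude vector evolved by a unitary. In both cases the two-element state space is the same; what changes is the kind of vector we use to describe it and the kind of matrix that evolves that vector. This is just my own illustration of the point, not anything taken from Laughlin or from Bernstein and Vazirani.

```python
import numpy as np

# Classical probabilistic bit: the description is a probability vector over {0, 1},
# and the dynamics is a column-stochastic matrix.
p = np.array([1.0, 0.0])                  # definitely in state 0
noisy_gate = np.array([[0.9, 0.2],
                       [0.1, 0.8]])       # column-stochastic transition matrix (assumed)
print("distribution after noisy gate:", noisy_gate @ p)

# Qubit: the description is an amplitude vector over the same two states,
# and the dynamics is a unitary matrix.
psi = np.array([1.0, 0.0], dtype=complex)
hadamard = np.array([[1, 1],
                     [1, -1]], dtype=complex) / np.sqrt(2)
psi = hadamard @ psi
print("amplitudes after Hadamard:  ", psi)
print("measurement probabilities:  ", np.abs(psi) ** 2)

# In both cases the *state space* is the same two-element set {0, 1};
# what differs is the kind of vector (probabilities vs. amplitudes) and the
# kind of matrix (stochastic vs. unitary) describing the dynamics.
```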
Now just as a probabilistically evolving digital classical computer can be made to act deterministically, we can talk about getting a digital quantum computer (digital means the state space is finite and discrete) to act deterministically. Or better yet, just as we sometimes desired probabilistic evolution with a given accuracy, we can get a digital quantum computer to behave as a unitarily evolving system with a given accuracy. This, of course, is what the theory of fault-tolerant quantum computation is designed to do. Of course the world didn’t have to give us the ability to perform fault-tolerant quantum computation, but happily it appears that it has.
State spaces are different from the dynamics, and hence from the way we describe the system. In fact, this is one of the reasons I cringe when I hear the word “analog” in discussions not just of quantum computers, but also in the biological sciences. The probabilistic nature of biological information processing has got nothing to do with the analog or digital nature of the underlying information (so while concentrations are analog, they should for all intents and purposes be thought of as a finite-size frequentist version of probabilities.)
OK, I’ve clearly written too much on this subject. I’ll leave you be now. But just to finish things off, here is an interesting argument to give to those who believe that quantum computers are analog devices. As I mentioned above, it is known how to use analog computers to solve NP-complete problems efficiently. But it is widely suspected that quantum computers will not be able to solve NP-complete problems efficiently (which in turn is perhaps a reason to think long and hard about whether this is really true: I waver, on about a three-year period, as to whether I should spend more time attempting to solve NP-complete problems efficiently on a quantum computer.) The question to ask a sceptic, then, is: if quantum computers are analog computers, why is it that they cannot solve NP-complete (or harder) problems? Surely this indicates that there is no direct analogy between analog and quantum computers, or one of the thousands of people working in the field would surely have discovered such a connection! Well, the argument might not work, but I’d be curious to hear the variety of sceptics’ retorts.

The History of My Brain

The brain is a remarkable computational device (not material? bah. get your faith out of my office.) A rough estimate of the computational power needed to simulate the brain’s neural power is around a billion MIPs (millions of instructions per second.) When I was growing up, I was fascinated by the possibilities of artificial intelligence. In fact, this forms the basis of my three ways to do theoretical physics:

  1. Do it yourself. This requires lots of work and the ego to believe you can actually figure things out. The benefit is that you can perhaps contribute now. Unfortunately, I don’t believe that this is a long-term solution to doing theoretical physics (see 2 and 3).
  2. Build a computer to do it for you. Or build a computer to speed up certain functions in your brain. How long before we can build computers with vastly greater ability than our mushy brains? Are our brains really operating at some optimal limit for computation? For number crunching tasks, most definitely not. This solution seems to me the much more realistic long-term way to contribute to theoretical physics. Of course, you may not get the credit for any of this. The computer that did all the work will surely get the lion’s share of the credit. Can a computer win the Nobel prize?
  3. Join the Search for Extraterrestrial Intelligence (SETI). The theory here is that if we discover intelligent aliens they will more than likely be more advanced than we are. Thus they will have used 1 and 2 to learn vastly more about theoretical physics than we currently know. So if you join SETI and SETI succeeds, you will just be able to ask the aliens about advanced theoretical physics. Of course, this assumes that the aliens are nice and not bug-like (because as every fan of sci-fi knows, bug-like aliens are all about killing fleshy humans.)

But back to the topic at hand: simulating the human brain. One problem with an analysis that just looks at the raw computational speed of the human brain is that it disregards the computational complexity inherent in the structure of the brain itself. In particular, the brain is the product of 500 million plus years of evolution (I want to start counting from at least some ancestor which had some sort of nervous system). So the question I’ve always found fascinating is how large an overhead those many years of evolution represent in producing the structure of the brain. Do we need to simulate all 500 million plus years of evolution, requiring a computational power far beyond even our best projections of the computational power of the brain? This is particularly troublesome when you think about the power of evolution: our ancestors have been roaming around the landscape of possible brains for a long, long time. A counter to this is the idea that we don’t need to simulate all of the evolution, but only the developmental biology which leads to a full grown human. This latter point of view seems less computationally challenging, but still daunting. And I’ll bet that this is the computational hangup for simulating human intelligence: not the raw power of the brain, but the power to reproduce the development of the brain. I wonder if anyone has done any rough calculations on the computational power needed for this latter task.
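For what it’s worth, here is the sort of back-of-the-envelope arithmetic that sits behind the billion-MIPS figure I quoted above. All of the biological numbers are rough order-of-magnitude assumptions, and this says nothing about the (harder) developmental question.

```python
# Very rough estimate of the raw computational throughput of a human brain.
# All quantities are order-of-magnitude assumptions, not measurements.
neurons = 1e11              # ~10^11 neurons
synapses_per_neuron = 1e3   # ~10^3-10^4 synapses per neuron; take the low end
avg_firing_rate = 10.0      # ~10 Hz average firing rate, spikes per second
ops_per_synaptic_event = 1  # treat each synaptic event as one "instruction"

ops_per_second = neurons * synapses_per_neuron * avg_firing_rate * ops_per_synaptic_event
mips = ops_per_second / 1e6
print(f"~{ops_per_second:.0e} synaptic events/s, i.e. ~{mips:.0e} MIPS")
# ~1e15 operations per second, on the order of a billion MIPS, matching the ballpark above.
```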
As I said, when I was growing up, I was fascinated by the possibility of advanced computer intelligences. I can still remember the frustration of not being able to produce intelligence on my TRS-80 Color Computer II. So I left those dreams behind me, but I promised myself that if I ever thought computers were powerful enough to really start producing intelligence I would return to the field. I guess you shouldn’t be surprised, then, if in thirty years you find me hacking away in some dark basement beside a machine named “Newton.”

Did We Overgeneralize to the Hidden Subgroup Problem?

The most famous problem in quantum computing is the hidden subgroup problem. In the hidden subgroup problem you are given access to an oracle which computes a function f. This function is a function from a group G to some set, with a promise on the function. The promise is that f is constant and distinct on left cosets of some subgroup H of G. The goal of the hidden subgroup problem is, by querying the function, to determine a generating set for the subgroup H. We say that an algorithm for the hidden subgroup problem is efficient if it uses resources proportional to a polynomial in the logarithm of the size of the group.
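As a toy, entirely classical illustration of the promise, here is a function on the cyclic group Z_12 which hides the subgroup H = {0, 4, 8}: f is constant on each coset of H and takes distinct values on distinct cosets. (Nothing quantum is going on here; it just pins down what the oracle is promising.)

```python
N = 12                      # the group G = Z_12 under addition mod 12
H = {0, 4, 8}               # the hidden subgroup generated by 4

def f(x):
    """Oracle hiding H: constant and distinct on the cosets x + H."""
    # Label each coset by its smallest representative.
    return min((x + h) % N for h in H)

cosets = {}
for x in range(N):
    cosets.setdefault(f(x), []).append(x)
print(cosets)
# {0: [0, 4, 8], 1: [1, 5, 9], 2: [2, 6, 10], 3: [3, 7, 11]}
```

The hidden subgroup problem asks you to recover (a generating set for) H from query access to f alone, using a number of queries polynomial in log|G|.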
The reason the hidden subgroup problem is famous is threefold. The first reason is that when G is Abelian, efficiently solving the hidden subgroup problem leads to an efficient algorithm for factoring integers and for the discrete logarithm problem, i.e. to Shor’s algorithm(s). The second reason is that efficiently solving the hidden subgroup problem when G is the symmetric group would lead to an efficient algorithm for the graph isomorphism problem. The third reason is that efficiently solving the hidden subgroup problem when G is the dihedral group would lead to an efficient algorithm for certain shortest vector in a lattice problems. The first of these reasons is a concrete reason: a quantum algorithm worked! But for the second and third, no efficient quantum algorithm is known.
Now what I find interesting is that for the second of these reasons, graph isomorphism, one can get by with solving a much more restricted version of the hidden subgroup problem than the full hidden subgroup problem over the symmetric group. The way to do this is to modify the problem to one I call the hidden subgroup conjugacy problem. Call two subgroups H and K conjugate to each other if there exists an element g of the group G such that gHg^{-1}=K. In the hidden subgroup conjugacy problem, instead of identifying the subgroup (by returning a set of generators, for example), we require that you only identify which conjugacy class the subgroup belongs to, i.e. the setup is the same: you are given an f which hides a subgroup H, but now instead of identifying the subgroup you only want to know which conjugacy class the subgroup belongs to. It is easy to see that for the Abelian case this reduces to the hidden subgroup problem: gHg^{-1}=H for all g in this case. For graph isomorphism, the standard reduction to the hidden subgroup problem reduces to distinguishing between the subgroup being the trivial subgroup and the subgroup being an order-two subgroup. So efficiently solving the hidden subgroup conjugacy problem would lead to an efficient algorithm for graph isomorphism. Interestingly, the same does not seem to be true for the reduction from the hidden subgroup problem to certain shortest vector in a lattice problems, although I haven’t thought hard about this.
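To see what “identify the conjugacy class” is asking for, here is a small sketch in S_3, with permutations written as tuples: the three order-two subgroups are all conjugate to one another, so the hidden subgroup conjugacy problem would only ask which conjugacy class of subgroups H lives in, not which particular order-two subgroup it is. This is just my own illustration of the definition.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations written as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}        # order-two subgroup generated by the swap of 0 and 1

# The conjugates g H g^{-1} as g ranges over S_3:
conjugates = {frozenset(compose(compose(g, h), inverse(g)) for h in H) for g in S3}
for K in conjugates:
    print(sorted(K))
# Three distinct order-two subgroups appear: they form a single conjugacy class.
```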
So the question I ask is, did we overgeneralize the hidden subgroup problem? Did we generalize ourselves into a problem which is just too hard, while efficiently solving a smaller variant would lead to interesting new quantum algorithms? I leave history to judge.

The Title is Subtle but not Malicious

In a clear demonstration that Scott Aaronson has spent too much time on the campus of the Institute for Advanced Study, I find that last week he posted cs.cc/0504048:

Oracles Are Subtle But Not Malicious
Scott Aaronson
Theoretical computer scientists have been debating the role of oracles since the 1970’s. This paper illustrates both that oracles can give us nontrivial insights about the barrier problems in circuit complexity, and that they need not prevent us from trying to solve those problems.
First, we give an oracle relative to which PP has linear-sized circuits, by proving a new lower bound for perceptrons and low-degree threshold polynomials. This oracle settles a longstanding open question, and generalizes earlier results due to Beigel and to Buhrman, Fortnow, and Thierauf. More importantly, it implies the first nonrelativizing separation of “traditional” complexity classes, as opposed to interactive proof classes such as MIP and MA-EXP. For Vinodchandran showed, by a nonrelativizing argument, that PP does not have circuits of size n^k for any fixed k. We present an alternative proof of this fact, which shows that PP does not even have quantum circuits of size n^k with quantum advice. To our knowledge, this is the first nontrivial lower bound on quantum circuit size.
Second, we study a beautiful algorithm of Bshouty et al. for learning Boolean circuits in ZPP^NP. We show that the NP queries in this algorithm cannot be parallelized by any relativizing technique, by giving an oracle relative to which ZPP^||NP and even BPP^||NP have linear-size circuits. On the other hand, we also show that the NP queries could be parallelized if P=NP. Thus, classes such as ZPP^||NP inhabit a “twilight zone,” where we need to distinguish between relativizing and black-box techniques. Our results on this subject have implications for computational learning theory as well as for the circuit minimization problem.

In computational complexity, one often considers the power of a complexity class when it is given access to a black box which solves problems from a different computational complexity class. This is where all the funny business of raising computational complexity classes to a power comes from. Thus, for example, in Scott’s abstract he talks about ZPP^NP, which is the complexity class zero-error probabilistic polynomial time given access to an oracle which returns answers (at unit cost) to questions in the computational complexity class NP. Whereas philosophers for eons have pondered Gods, computer scientists have rigorously defined their Gods, and found the world a polytheistic conglomeration of computational complexity classes.
With the notion of an oracle, one can begin to ponder whether certain results you prove in computational complexity are robust under the use of oracles. Suppose you have shown some inclusion in computational complexity theory. Then you can consider whether this result still holds when the computational complexity classes involved are all given access to the same oracle. If the result holds for all possible choices of oracle, then we say that the result is relativizing. Why does this matter? Well, one reason is that we know that if we are going to prove P does not equal NP, then this must be done by a nonrelativizing method (this follows from the fact that there exist oracles A and B relative to which P^A=NP^A and P^B≠NP^B.)
What Scott shows in his paper is that there exists an oracle relative to which the computational complexity class PP (or, you know, for those of us who love quantum theory, the computational complexity class post-BQP, i.e. bounded-error quantum polynomial time with postselection) has linear-sized circuits. Why is this important? Well, because it has been shown, via a proof constructed by Vinodchandran, that the computational complexity class PP does not have linear-sized circuits. This means that the result of Vinodchandran is nonrelativizing! Nonrelativizing proofs in computational complexity are, especially because of their connection to the P/NP question, totally awesome. It’s like you’ve separated a mortal from a God, but only when the other Gods aren’t around to help with the battle. Thinking about all this really makes me want to write a comic strip based on computational complexity classes.

More Papers

When it rains, it pours! Andris points me to another interesting paper, this time by Shengyu Zhang (Princeton), quant-ph/0504085:

(Almost) tight bounds for randomized and quantum Local Search on hypercubes and grids
Shengyu Zhang
The Local Search problem, which finds a local minimum of a black-box function on a given graph, is of both practical and theoretical importance to many areas in computer science and natural sciences. In this paper, we show that for the Boolean hypercube $B^n$, the randomized query complexity of Local Search is $\Theta(2^{n/2}n^{1/2})$ and the quantum query complexity is $\Theta(2^{n/3}n^{1/6})$. We also show that for the constant dimensional grid $[N^{1/d}]^d$, the randomized query complexity is $\Theta(N^{1/2})$ for $d \geq 4$ and the quantum query complexity is $\Theta(N^{1/3})$ for $d \geq 6$. New lower bounds for lower dimensional grids are also given. These improve the previous results by Aaronson [STOC’04], and Santha and Szegedy [STOC’04]. Finally we show for $[N^{1/2}]^2$ a new upper bound of $O(N^{1/4}(\log\log N)^{3/2})$ on the quantum query complexity, which implies that Local Search on grids exhibits different properties at low dimensions.

In the local search problem, one is given a graph and a black-box function from its vertices to the natural numbers, and one seeks to return a vertex whose function value is no greater than the value at any of its neighbors, i.e. a local minimum. Here, Zhang is able to demonstrate a remarkable number of matching bounds for this problem in both the randomized classical model and the quantum model.
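For readers who haven’t met the problem, here is a minimal classical sketch: greedy descent on the Boolean hypercube, counting queries to the black box. This is of course just the naive algorithm, not the randomized or quantum algorithms whose query complexity Zhang actually pins down, and the example function is my own toy choice.

```python
import random

def local_search(f, n, start=None):
    """Greedy descent on the n-dimensional Boolean hypercube.

    Returns a vertex whose f-value is <= that of all its n neighbors
    (a local minimum), along with the number of queries made to f.
    """
    x = list(start) if start is not None else [random.randint(0, 1) for _ in range(n)]
    queries = 0
    fx = f(tuple(x))
    queries += 1
    while True:
        best, best_val = None, fx
        for i in range(n):                     # examine all n neighbors
            x[i] ^= 1
            val = f(tuple(x))
            queries += 1
            x[i] ^= 1
            if val < best_val:
                best, best_val = i, val
        if best is None:                       # no neighbor is smaller: local minimum
            return tuple(x), queries
        x[best] ^= 1
        fx = best_val

# Example black box: Hamming weight, whose unique local minimum is the all-zeros vertex.
n = 8
vertex, queries = local_search(lambda x: sum(x), n)
print(vertex, "found with", queries, "queries")
```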
Another interesting paper which appeared today is Michael Nielsen’s (University of Queensland) summary (plus a bit more) about cluster state quantum computing, quant-ph/0504097:

Cluster-state quantum computation
Michael A. Nielsen
This article is a short introduction to and review of the cluster-state model of quantum computation, in which coherent quantum information processing is accomplished via a sequence of single-qubit measurements applied to a fixed quantum state known as a cluster state. We also discuss a few novel properties of the model, including a proof that the cluster state cannot occur as the exact ground state of any naturally occurring physical system, and a proof that measurements on any quantum state which is linearly prepared in one dimension can be efficiently simulated on a classical computer, and thus are not candidates for use as a substrate for quantum computation.

This is a nice review, which contains some interesting bonus(!) results. The first is that the cluster state cannot be the exact ground state of any two-body Hamiltonian. This is particularly interesting because one way one might imagine realizing the cluster state is by engineering the appropriate Hamiltonian and then cooling the system to a ground state which is the cluster state. The other important result in this paper is that certain one-dimensional quantum systems aren’t useful for quantum computation: in particular, one-dimensional quantum systems which are prepared via a linear series of unitary evolutions can be efficiently simulated on a classical computer. Michael shows that this is true for qubit circuits; I wonder how this scales when we replace these qubits by qudits? Also I see there is a note about this latter result also being shown by Steve Flammia (University of New Mexico), Bryan Eastin (University of New Mexico), and Andrew Doherty (University of Queensland). (Correction: it seems I can’t parse a sentence. The latter result is joint work with Andrew Doherty (University of Queensland), while Steve and Bryan are acknowledged for introducing cluster states to the qcircuit LaTeX package.)
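For readers who haven’t met cluster states, here is a tiny numpy sketch of the textbook construction of a three-qubit linear cluster state: prepare each qubit in |+> and apply controlled-Z between nearest neighbors. (This is just the standard definition, not anything specific to Nielsen’s paper beyond what it reviews.)

```python
import numpy as np

def kron_all(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

I = np.eye(2, dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)     # controlled-Z on two qubits

# Start with |+>|+>|+> on a line of three qubits.
state = kron_all(plus.reshape(2, 1), plus.reshape(2, 1), plus.reshape(2, 1)).ravel()

# Apply CZ between qubits (1,2) and qubits (2,3), the edges of the line graph.
CZ_12 = kron_all(CZ, I)
CZ_23 = kron_all(I, CZ)
cluster = CZ_23 @ CZ_12 @ state

print(np.round(cluster * np.sqrt(8), 3))
# Up to the overall 1/sqrt(8), every computational basis state appears with
# amplitude +1 or -1; the pattern of minus signs encodes the line-graph structure.
```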

More Good FOCS Fodder

Another cool paper on the arXiv today, this time by Iordanis Kerenidis (MIT):

Quantum multiparty communication complexity and circuit lower bounds
Iordanis Kerenidis
We define a quantum model for multiparty communication complexity and prove a simulation theorem between the classical and quantum models. As a result of our simulation, we show that if the quantum k-party communication complexity of a function f is $\Omega(n/2^k)$, then its classical k-party communication is $\Omega(n/2^{k/2})$. Finding such an f would allow us to prove strong classical lower bounds for $(k > \log n)$ players and hence resolve a main open question about symmetric circuits. Furthermore, we prove that for the Generalized Inner Product (GIP) function, the quantum model is exponentially more efficient than the classical one. This provides the first exponential separation for a total function between any quantum and public coin randomized communication model.

The first part of the abstract refers to one of the grand hopes of the field of communication complexity: to prove that a function cannot be computed with ACC circuits (constant depth, polynomial size circuits with unbounded fan-in and mod gates.) It turns out that finding such an f corresponds to proving a superlogarithmic lower bound for a k-party communication complexity problem. What Iordanis does is show an explicit connection between lower bounds for classical multiparty communication complexity and lower bounds for quantum multiparty communication complexity. The hope here is that the proofs in the quantum world will turn out to be simpler than in the classical world, where all attempts to get explicit functions out of ACC have failed. Did we ever imagine there would be a day when the quantum world was considered simpler than the classical world? Yay for the new generation.
In the second part of the paper, Iordanis shows an exponential separation between classical communication complexity and quantum communication complexity for a total function. A total function is a function which is defined over all of its inputs. Previously, all the functions for which there was an exponential separation were promise problems or relations. This is quite cool for the world of communication complexity. This follows earlier work by Iordanis and coworkers Ziv Bar-Yossef and T.S. Jayram, where they were able to show an exponential quantum-classical separation for a communication complexity problem which used only one-way communication. Quite a roll to be on!
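For reference, the Generalized Inner Product function from the abstract is itself a total function and is easy to state: in the number-on-the-forehead multiparty model each of the k players sees every input except its own, and GIP is the parity of the number of positions where all k bit strings have a 1. A minimal sketch of the function itself (not of any communication protocol):

```python
from functools import reduce
from operator import and_, xor

def gip(inputs):
    """Generalized Inner Product: parity of the number of positions where
    every one of the k parties' n-bit strings has a 1.

    `inputs` is a list of k equal-length tuples of 0s and 1s.
    """
    columns = zip(*inputs)                         # position j holds bit j of every party
    all_ones = (reduce(and_, col) for col in columns)
    return reduce(xor, all_ones, 0)

# Example with k = 3 parties holding 4-bit strings:
x1 = (1, 0, 1, 1)
x2 = (1, 1, 1, 0)
x3 = (1, 0, 1, 1)
print(gip([x1, x2, x3]))   # positions 0 and 2 are all-ones, so the parity is 0
```

Since gip is defined on every possible input, it is a total function, which is what makes the exponential quantum-classical separation here new.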