Did We Overgeneralize to the Hidden Subgroup Problem?

The most famous problem in quantum computing is the hidden subgroup problem. In the hidden subgroup problem you are given access to an oracle which computes a function f from a group G to some set, with a promise: f is constant on the left cosets of some subgroup H of G and takes distinct values on distinct cosets. The goal of the hidden subgroup problem is, by querying the function, to determine a generating set for the subgroup H. We say that an algorithm for the hidden subgroup problem is efficient if it uses resources proportional to a polynomial in the logarithm of the size of the group.
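To make the promise concrete, here is a minimal classical sketch of my own (a toy abelian instance, not anything from a paper): take G = Z_12 under addition, hidden subgroup H generated by 4, and oracle f(x) = x mod 4.

```python
# Toy illustration of the hidden subgroup promise (my own example).
# G = Z_12 under addition; hidden subgroup H = <4> = {0, 4, 8}.
# f is constant on each coset x + H and distinct on distinct cosets.
G = range(12)
H = {0, 4, 8}

def f(x):
    return x % 4  # any function constant exactly on the cosets of H works

# The promise: f(x) == f(y) exactly when x and y lie in the same coset,
# i.e. when x - y is an element of H.
for x in G:
    for y in G:
        assert (f(x) == f(y)) == ((x - y) % 12 in H)

print("promise verified: f hides H =", sorted(H))
```

A quantum algorithm never enumerates G like this, of course; the point is only what the promise says.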
The reason the hidden subgroup problem is famous is threefold. The first reason is that when G is Abelian, efficiently solving the hidden subgroup problem leads to an efficient algorithm for factoring integers and for the discrete logarithm problem, i.e. to Shor’s algorithm(s). The second reason is that efficiently solving the hidden subgroup problem when G is the symmetric group would lead to an efficient algorithm for the graph isomorphism problem. The third reason is that efficiently solving the hidden subgroup problem when G is the dihedral group would lead to an efficient algorithm for certain shortest vector problems in lattices. The first of these reasons is concrete: a quantum algorithm worked! But for the second and third, no efficient quantum algorithm is known.
Now what I find interesting is that for the second of these reasons, graph isomorphism, one can get by with solving a much more restricted version of the hidden subgroup problem than the full hidden subgroup problem over the symmetric group. The way to do this is to modify the problem to a problem I call the hidden subgroup conjugacy problem. Call two subgroups H and K conjugate to each other if there exists an element g of the group G such that gHg^{-1}=K. In the hidden subgroup conjugacy problem the setup is the same: you are given an f which hides a subgroup H, but now instead of identifying the subgroup (by returning a set of generators, for example) you only want to know which conjugacy class the subgroup belongs to. It is easy to see that for the Abelian case this reduces to the hidden subgroup problem: gHg^{-1}=H for all g in this case, so every conjugacy class contains exactly one subgroup. For graph isomorphism, the standard reduction to the hidden subgroup problem reduces to distinguishing between the subgroup being the trivial subgroup and the subgroup being an order-two subgroup. So efficiently solving the hidden subgroup conjugacy problem would lead to an efficient algorithm for graph isomorphism. Interestingly, the same does not seem to be true for the reduction from certain shortest vector problems in lattices to the dihedral hidden subgroup problem, although I haven’t thought hard about this.
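To see what the conjugacy coarse-graining throws away, here is a little sketch of my own (pure Python, permutations as tuples): the two order-two subgroups of S_3 generated by different transpositions are distinct subgroups, but they are conjugate, so the hidden subgroup conjugacy problem does not distinguish them.

```python
# Toy check of subgroup conjugacy in S_3 (my own illustration).
from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def conjugate(g, H):               # the subgroup g H g^{-1}
    return frozenset(compose(compose(g, h), inverse(g)) for h in H)

S3 = list(permutations(range(3)))
e = (0, 1, 2)
H = frozenset({e, (1, 0, 2)})      # generated by the transposition (0 1)
K = frozenset({e, (2, 1, 0)})      # generated by the transposition (0 2)

print(any(conjugate(g, H) == K for g in S3))  # True: H and K are conjugate
```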
So the question I ask is, did we overgeneralize the hidden subgroup problem? Did we generalize ourselves into a problem which is just too hard, while efficiently solving a smaller variant would lead to interesting new quantum algorithms? I leave history to judge.

Bound Secrecy

One of the most fascinating early endeavours in quantum information theory was the discovery that pure bipartite quantum entanglement is a quantifiable resource. Thus, for instance, one can take many copies of the standard currency for bipartite pure entanglement, the singlet state, and create many copies of any other non-maximally entangled bipartite pure state. Similarly one can take many copies of a non-maximally entangled bipartite pure state and distill out copies of a singlet. The rates of these conversion processes are quantified by the von Neumann entropy of one half of the non-maximally entangled state.
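In equations (standard textbook material, nothing new): for a two-qubit pure state written in its Schmidt basis,

```latex
\[
  \lvert\psi\rangle = \cos\theta\,\lvert 00\rangle + \sin\theta\,\lvert 11\rangle,
\qquad
  E(\lvert\psi\rangle) = S(\rho_A)
  = -\cos^2\theta \log_2 \cos^2\theta - \sin^2\theta \log_2 \sin^2\theta,
\]
```

and n copies of such a state can be converted into (or distilled from) roughly n E singlets as n grows large.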
The situation, however, for mixed states is different. Here we find that there are states from which no pure entanglement can be distilled, but which are in fact entangled. These states are called bound entangled states because they require some pure entanglement in order to create them (thus the entanglement is bound into the mixed state). Bound entangled states are strange beasts: entangled, and yet unable to be converted back into any sort of pure state entanglement currency.
One interesting use of a standard currency for bipartite pure state entanglement is in key generation. In these protocols, Alice and Bob distill noisy quantum states to obtain pure entangled states of some standard currency which can then be used to share a private key. Since an essential part of these protocols has been to distill pure entangled states, it seems, when you first think about it, that bound entangled states would not be useful for private key generation. However, this turns out to not be correct! Karol Horodecki, Michal Horodecki, Pawel Horodecki and Jonathan Oppenheim have shown (PRL 94, 160502 (2005), quant-ph/0309110) that one can obtain a secure key from bound entangled states! So, while bound entangled states often don’t behave like your standard non-bound entangled states, it turns out that for secret key generation, they are useful.

The Title is Subtle but not Malicious

In a clear demonstration that Scott Aaronson has spent too much time on the campus of the Institute for Advanced Study, I find that last week he posted cs.cc/0504048:

Oracles Are Subtle But Not Malicious
Scott Aaronson
Theoretical computer scientists have been debating the role of oracles since the 1970’s. This paper illustrates both that oracles can give us nontrivial insights about the barrier problems in circuit complexity, and that they need not prevent us from trying to solve those problems.
First, we give an oracle relative to which PP has linear-sized circuits, by proving a new lower bound for perceptrons and low-degree threshold polynomials. This oracle settles a longstanding open question, and generalizes earlier results due to Beigel and to Buhrman, Fortnow, and Thierauf. More importantly, it implies the first nonrelativizing separation of “traditional” complexity classes, as opposed to interactive proof classes such as MIP and MA-EXP. For Vinodchandran showed, by a nonrelativizing argument, that PP does not have circuits of size n^k for any fixed k. We present an alternative proof of this fact, which shows that PP does not even have quantum circuits of size n^k with quantum advice. To our knowledge, this is the first nontrivial lower bound on quantum circuit size.
Second, we study a beautiful algorithm of Bshouty et al. for learning Boolean circuits in ZPP^NP. We show that the NP queries in this algorithm cannot be parallelized by any relativizing technique, by giving an oracle relative to which ZPP^||NP and even BPP^||NP have linear-size circuits. On the other hand, we also show that the NP queries could be parallelized if P=NP. Thus, classes such as ZPP^||NP inhabit a “twilight zone,” where we need to distinguish between relativizing and black-box techniques. Our results on this subject have implications for computational learning theory as well as for the circuit minimization problem.

In computational complexity, one often considers the power of a complexity class when you are given access to a black box which solves a problem from a different computational complexity class. This is where all the funny business of raising computational complexity classes to a power comes from. Thus, for example, in Scott’s abstract he talks about ZPP^NP, which is the complexity class zero-error probabilistic polynomial time given access to an oracle which returns answers (at unit cost) to problems in the computational complexity class NP. Whereas philosophers for eons have pondered Gods, computer scientists have rigorously defined their Gods, and found the world a polytheistic conglomeration of computational complexity classes.
With the notion of an oracle, one can begin to ponder whether certain results you prove in computational complexity are robust under the use of oracles. Suppose you have shown some inclusion in computational complexity theory. Then you can consider whether this result survives when the computational complexity classes involved are all given access to the same oracle. If the result holds for all possible choices of oracle, then we say that the result is relativizing. Why does this matter? Well, one reason is that we know that if we are going to prove P does not equal NP, then this must be done by a nonrelativizing method (this follows from the fact that there exist oracles A and B relative to which P^A=NP^A and P^B≠NP^B).
What Scott shows in his paper is that there exists an oracle relative to which the computational complexity class PP (or, you know, for those of us who love quantum theory, the computational complexity class PostBQP, i.e. bounded-error quantum polynomial time with postselection) has linear-sized circuits. Why is this important? Well, because Vinodchandran proved that the computational complexity class PP does not have circuits of size n^k for any fixed k. This means that the result of Vinodchandran is nonrelativizing! Nonrelativizing proofs in computational complexity are, especially because of their connection to the P/NP question, totally awesome. It’s like you’ve separated a mortal from a God, but only when the other Gods aren’t around to help with the battle. Thinking about all this really makes me want to write a comic strip based on computational complexity classes.

More Papers

When it rains, it pours! Andris points me to another interesting paper, this time by Shengyu Zhang (Princeton), quant-ph/0504085:

(Almost) tight bounds for randomized and quantum Local Search on hypercubes and grids
Shengyu Zhang
The Local Search problem, which finds a local minimum of a black-box function on a given graph, is of both practical and theoretical importance to many areas in computer science and natural sciences. In this paper, we show that for the Boolean hypercube $B^n$, the randomized query complexity of Local Search is $\Theta(2^{n/2}n^{1/2})$ and the quantum query complexity is $\Theta(2^{n/3}n^{1/6})$. We also show that for the constant dimensional grid $[N^{1/d}]^d$, the randomized query complexity is $\Theta(N^{1/2})$ for $d \geq 4$ and the quantum query complexity is $\Theta(N^{1/3})$ for $d \geq 6$. New lower bounds for lower dimensional grids are also given. These improve the previous results by Aaronson [STOC’04], and Santha and Szegedy [STOC’04]. Finally we show for $[N^{1/2}]^2$ a new upper bound of $O(N^{1/4}(\log\log N)^{3/2})$ on the quantum query complexity, which implies that Local Search on grids exhibits different properties at low dimensions.

In the local search problem, one is given a graph and a function from its vertices to the natural numbers, and one seeks a vertex on which the function takes a value no greater than its value on any of that vertex’s neighbors, i.e. a local minimum. Here, Zhang is able to demonstrate a remarkable number of matching bounds for this problem in both the randomized classical model and the quantum model.
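For concreteness, here is a naive deterministic version of the problem, a sketch of my own (the algorithms achieving the bounds above are far cleverer): walk downhill one bit flip at a time, counting queries.

```python
# Naive greedy Local Search on the Boolean hypercube B^n (my own sketch).
# Vertices are n-bit integers; neighbors differ in one bit. We stop at a
# vertex no neighbor improves on, i.e. a local minimum. Each evaluation
# of f counts as one query, the resource the paper's bounds measure.
def local_search(f, n, start):
    v, fv, queries = start, f(start), 1
    improved = True
    while improved:
        improved = False
        for i in range(n):
            u = v ^ (1 << i)          # flip bit i to get a neighbor
            fu = f(u)
            queries += 1
            if fu < fv:
                v, fv, improved = u, fu, True
                break
    return v, queries

f = lambda x: bin(x).count("1")       # unique local minimum at 0
print(local_search(f, n=8, start=0b10110101))
```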
Another interesting paper which appeared today is Michael Nielsen’s (University of Queensland) summary (plus a bit more) about cluster state quantum computing, quant-ph/0504097:

Cluster-state quantum computation
Michael A. Nielsen
This article is a short introduction to and review of the cluster-state model of quantum computation, in which coherent quantum information processing is accomplished via a sequence of single-qubit measurements applied to a fixed quantum state known as a cluster state. We also discuss a few novel properties of the model, including a proof that the cluster state cannot occur as the exact ground state of any naturally occurring physical system, and a proof that measurements on any quantum state which is linearly prepared in one dimension can be efficiently simulated on a classical computer, and thus are not candidates for use as a substrate for quantum computation.

This is a nice review, which contains some interesting bonus(!) results. The first is that the cluster state cannot be the exact ground state of any two-body Hamiltonian. This is particularly interesting because one way one might imagine realizing the cluster state is by engineering the appropriate Hamiltonian and then cooling the system into a ground state which is the cluster state. The other important result in this paper is that certain one dimensional quantum systems aren’t useful for quantum computation. In particular, one dimensional quantum systems which are prepared via a linear series of unitary evolutions can be efficiently simulated on a classical computer. Michael shows that this is true for qubit circuits; I wonder how this scales when we replace these qubits by qudits? Also I see there is a note about this latter result also being shown by Steve Flammia (University of New Mexico), Bryan Eastin (University of New Mexico), and Andrew Doherty (University of Queensland). (Correction: it seems I can’t parse a sentence. The latter result is joint work with Andrew Doherty (University of Queensland), while Steve and Bryan are acknowledged for introducing cluster states to the qcircuit LaTeX package.)
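For readers who haven’t met cluster states, here is a tiny dense-vector sketch of my own showing how the one dimensional cluster state is defined: put every qubit in |+> and apply controlled-Z between neighbors.

```python
# Building a 1D cluster state on n qubits (my own toy code; dense state
# vectors, so only sensible for very small n).
import numpy as np

def cluster_state_1d(n):
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    state = plus
    for _ in range(n - 1):
        state = np.kron(state, plus)              # |+>^{tensor n}
    # Controlled-Z on each neighboring pair (i, i+1): flip the sign of
    # every computational basis state in which both qubits are 1.
    for i in range(n - 1):
        for idx in range(2 ** n):
            if (idx >> (n - 1 - i)) & 1 and (idx >> (n - 2 - i)) & 1:
                state[idx] *= -1.0
    return state

psi = cluster_state_1d(4)
print(np.linalg.norm(psi))  # ~1.0: the CZ gates are diagonal unitaries
```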

More Good FOCS Fodder

Another cool paper on the arXiv today, this time by Iordanis Kerenidis (MIT):

Quantum multiparty communication complexity and circuit lower bounds
Iordanis Kerenidis
We define a quantum model for multiparty communication complexity and prove a simulation theorem between the classical and quantum models. As a result of our simulation, we show that if the quantum k-party communication complexity of a function f is $\Omega(n/2^k)$, then its classical k-party communication is $\Omega(n/2^{k/2})$. Finding such an f would allow us to prove strong classical lower bounds for $k > \log n$ players and hence resolve a main open question about symmetric circuits. Furthermore, we prove that for the Generalized Inner Product (GIP) function, the quantum model is exponentially more efficient than the classical one. This provides the first exponential separation for a total function between any quantum and public coin randomized communication model.

The first part of the abstract refers to one of the grand hopes of the field of communication complexity: to prove that a function cannot be computed with ACC circuits (constant-depth, polynomial-size circuits with unbounded fan-in and mod gates). It turns out that finding such an f corresponds to proving a superlogarithmic lower bound for a k-party communication complexity problem. What Iordanis does is show an explicit connection between lower bounds for classical multiparty communication complexity and lower bounds for quantum multiparty communication complexity. The hope here is that the proofs in the quantum world will turn out to be simpler than in the classical world, where all attempts to get functions out of ACC have failed. Did we ever imagine there would be a day when the quantum world was considered simpler than the classical world? Yay for the new generation.
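For the curious, the GIP function itself is easy to state; here is a definition sketch of my own in code: k players each hold an n-bit string, and GIP is the parity of the number of positions where all k strings are 1.

```python
# The Generalized Inner Product (GIP) function (definition sketch only).
def gip(inputs):
    # inputs: a list of k equal-length bit tuples, one per player
    n = len(inputs[0])
    ones_everywhere = sum(all(x[i] for x in inputs) for i in range(n))
    return ones_everywhere % 2

# Three players, four positions; all-ones columns are 0 and 3, so GIP = 0.
print(gip([(1, 1, 0, 1), (1, 0, 1, 1), (1, 1, 1, 1)]))
```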
In the second part of the paper, Iordanis shows an exponential separation between classical communication complexity and quantum communication complexity for a total function. A total function is a function which is defined over all of its inputs. Previously all the functions for which there was an exponential separation were promise problems or relations. This is quite cool for the world of communication complexity. It follows earlier work by Iordanis and coworkers Ziv Bar-Yossef and T.S. Jayram, where they were able to show an exponential quantum-classical separation for a communication complexity problem which used only one-way communication. Quite a roll to be on!

FOCS Time

The 46th Annual IEEE Symposium on Foundations of Computer Science (a.k.a. FOCS) submission deadline was last Friday. Not surprisingly this means that there are more than the average number of interesting papers being posted on the Tuesday listing of the arXiv.
Of course, I will start by doing a little shameless self-promotion. Here is the latest and greatest from myself, Andrew Childs (Caltech), and Wim van Dam (UCSB) (everyone do the new paper dance), quant-ph/0504083:

From optimal measurement to efficient quantum algorithms for the hidden subgroup problem over semidirect product groups
Dave Bacon, Andrew Childs, and Wim van Dam
We approach the hidden subgroup problem by performing the so-called pretty good measurement on hidden subgroup states. For various groups that can be expressed as the semidirect product of an abelian group and a cyclic group, we show that the pretty good measurement is optimal and that its probability of success and unitary implementation are closely related to an average-case algebraic problem. By solving this problem, we find efficient quantum algorithms for a number of nonabelian hidden subgroup problems, including some for which no efficient algorithm was previously known: certain metacyclic groups as well as all groups of the form $(Z_p)^r \rtimes Z_p$ for fixed r (including the Heisenberg group, r=2). In particular, our results show that entangled measurements across multiple copies of hidden subgroup states can be useful for efficiently solving the nonabelian HSP.

The nonabelian hidden subgroup problem is one of the most frustratingly well studied problems in the theory of quantum algorithms. This is because an efficient algorithm for the hidden subgroup problem over the symmetric group would lead to an efficient algorithm for the graph isomorphism problem (the graph isomorphism problem asks whether two graphs, presented to you by their adjacency matrices, are really the same graph under some relabeling of the vertices). To this end, there are many positive and many negative results about this problem. One thing we do know is that the query complexity of the problem is polynomial; we just don’t know how to turn this into efficient algorithms. In our paper we show, in particular for certain groups which include the Heisenberg group, that there are multi-query quantum algorithms which make explicit use of the numerous queries to produce an efficient quantum algorithm. Certainly I take this as evidence that there is much left to be understood about how exactly we can achieve exponential speedups of quantum over classical computers.
Another interesting component of this work is that it shows how an efficient quantum algorithm for the hidden subgroup problems we consider can be achieved by producing an efficient algorithm for certain algebraic problems. This was also true for the case of the dihedral hidden subgroup problem, where, in that case, the algebraic problem turned out to be the subset sum problem. Since that problem is NP-complete, this didn’t lead to an efficient algorithm. But for the groups we consider in our new paper, the problem is efficiently solvable classically, and we are able to bootstrap this into a quantum algorithm to achieve the speedup.
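To give a feel for the kind of algebraic problem that shows up, here is a brute-force subset sum sketch of my own (a toy illustration, not the algorithm from either paper): given integers t_1,…,t_m and a target s mod N, find a subset summing to s. The 2^m brute force below is exactly why the dihedral reduction doesn’t, by itself, give an efficient algorithm.

```python
# Brute-force subset sum mod N (my own toy sketch).
from itertools import combinations

def subset_sum(ts, s, N):
    for r in range(len(ts) + 1):
        for combo in combinations(range(len(ts)), r):
            if sum(ts[i] for i in combo) % N == s:
                return combo          # indices of one solution subset
    return None

print(subset_sum([3, 7, 12, 5], s=10, N=16))  # (0, 1): 3 + 7 = 10
```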
Another interesting, and closely related, paper is by Cris Moore (University of New Mexico…no “h”!) and Alex Russell (University of Connecticut), quant-ph/0504067:

Bratteli Diagrams and Subexponential Time Quantum Algorithms for Hidden Subgroup Problems: or, Fourier Sampling Strikes Back
Cristopher Moore and Alexander Russell
We present an explicit measurement in the Fourier basis that solves an important case of the Hidden Subgroup Problem, including the case to which Graph Isomorphism reduces. This entangled measurement uses $k = \log_2 |G|$ registers, and each of the $2^k$ subsets of the registers contributes some information. We then give a general framework using the Bratteli diagram of the group for constructing worst-case to average-case reductions from the HSP to generalized Subset Sum problems analogous to those of Regev for the dihedral group. As a result, we obtain subexponential-time quantum algorithms for the hidden subgroup problem on a number of new group families, including the Heisenberg and affine groups. Our framework also yields subexponential-time algorithms for finding the order of hidden subgroups in all solvable groups of low degree, including all nilpotent groups.

Here, Cris and Alex present a large number of subexponential-time algorithms for numerous groups (including the Heisenberg group, for which we were able to find a polynomial-time algorithm). They are able to do this by showing how these algorithms can arise from the classical solution to a problem which arises from the representation theory of these groups. Again, what is essential in these algorithms is the use of multiple quantum queries.
As you can see, there seems to be quite a revival going on in research on the hidden subgroup problem. There is a lot of structure which many of us are starting to see! Fun times.
Along different lines, Ran Raz (Weizmann Institute) also has a very intriguing paper, quant-ph/0504075:

Quantum Information and the PCP Theorem
Ran Raz
We show how to encode $2^n$ (classical) bits $a_1,\ldots,a_{2^n}$ by a single quantum state $|\Psi\rangle$ of size $O(n)$ qubits, such that: for any constant $k$ and any $i_1,\ldots,i_k \in \{1,\ldots,2^n\}$, the values of the bits $a_{i_1},\ldots,a_{i_k}$ can be retrieved from $|\Psi\rangle$ by a one-round Arthur-Merlin interactive protocol of size polynomial in $n$. This shows how to go around Holevo-Nayak’s Theorem, using Arthur-Merlin proofs.
We use the new representation to prove the following results:
1) Interactive proofs with quantum advice: We show that the class $QIP/qpoly$ contains ALL languages. That is, for any language $L$ (even non-recursive), the membership $x \in L$ (for $x$ of length $n$) can be proved by a polynomial-size quantum interactive proof, where the verifier is a polynomial-size quantum circuit with working space initiated with some quantum state $|\Psi_{L,n}\rangle$ (depending only on $L$ and $n$). Moreover, the interactive proof that we give is of only one round, and the messages communicated are classical.
2) PCP with only one query: We show that the membership $x \in SAT$ (for $x$ of length $n$) can be proved by a logarithmic-size quantum state $|\Psi\rangle$, together with a polynomial-size classical proof consisting of blocks of length $polylog(n)$ bits each, such that after measuring the state $|\Psi\rangle$ the verifier only needs to read one block of the classical proof.
While the first result is a straightforward consequence of the new representation, the second requires an additional machinery of quantum low-degree-test that may be interesting in its own right.

I haven’t had time to work through this paper in detail, but one of the main results, that QIP(2) (one-round quantum interactive proofs) with polynomial quantum advice (a polynomial-sized quantum state) contains all languages, seems to me to be very surprising. I know that previously Scott Aaronson had shown that BQP (i.e. your standard efficient quantum algorithms) with polynomial quantum advice is contained in PP (probabilistic polynomial time…the worst-named complexity class ever!) with polynomial classical advice, but the jump to all languages seems to me to be quite a huge jump! Well, I certainly have some reading to do!
Finally there is a paper by Harry Buhrman (University of Amsterdam, CWI Amsterdam), Matthias Christandl (CQC Cambridge), Patrick Hayden (McGill), Hoi-Kwong Lo (University of Toronto) and Stephanie Wehner (CWI Amsterdam), quant-ph/0504078:

On the (Im)Possibility of Quantum String Commitment
Harry Buhrman, Matthias Christandl, Patrick Hayden, Hoi-Kwong Lo, and Stephanie Wehner
Unconditionally secure non-relativistic bit commitment is known to be impossible in both the classical and quantum worlds. However, when committing to a string of n bits at once, how far can we stretch the quantum limits? We consider quantum schemes where Alice commits a string of n bits to Bob, in such a way that she can only cheat on a bits and Bob can learn at most b bits of “information” before the reveal phase. We show a negative and a positive result, depending on how we measure Bob’s information. If we use the Holevo quantity, no good schemes exist: a+b is at least n. If, however, we use accessible information, there exists a scheme where a=4 log n+O(1) and b=4. This is classically impossible. Our protocol is not efficient, however, we also exhibit an efficient scheme when Bob’s measurement circuit is restricted to polynomial size. Our scheme also implies a protocol for n simultaneous coin flips which achieves higher entropy of the resulting string than any previously known protocol.

Bit commitment is something like the cryptographic version of a securely sealed safe. Alice begins by supplying Bob with data which commits her to a bit, but does not reveal the value of that bit to Bob. Later, when they wish to reveal the commitment, Alice sends some information to Bob such that Bob can infer what value Alice committed to. Unconditional security would mean that there is no way for Alice to change her commitment after she has communicated with Bob without Bob being able to detect the change, and similarly that Bob cannot learn anything about Alice’s bit before the reveal. It is known (or I should say there is a proof with what seem like reasonable foundations) that in the standard quantum information setting, unconditionally secure quantum bit commitment is not possible. But what about the problem of committing not to a bit, but to a string? Now Alice can cheat on some bits, and Bob can learn some bits, but what exactly is the relationship between these two quantities? Interestingly, the authors show that in the case where the measure of how much information Bob learns is the accessible information, there are quantum schemes which are impossible classically. Nothing like a result showing that quantum information kicks classical information’s rear end!
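For intuition about the commit/reveal structure, here is a toy classical scheme of my own: Alice hashes her bit together with a random nonce. Note the hedge: this is only computationally binding and hiding, which is precisely the kind of assumption the unconditional results discussed above refuse to make.

```python
# Toy hash-based bit commitment (classical analogy only; computationally
# secure at best, unlike the information-theoretic setting in the paper).
import hashlib, os

def commit(bit):
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, nonce         # digest goes to Bob; nonce stays secret

def reveal(digest, nonce, bit):
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

digest, nonce = commit(1)
print(reveal(digest, nonce, 1))  # True: the commitment opens correctly
print(reveal(digest, nonce, 0))  # False: Alice can't claim the other bit
```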
So, by my count, on Tuesday we had two algorithms papers demonstrating new speedups, a complexity theory paper demonstrating the amazing power of quantum advice for quantum interactive proofs, and a cryptographic paper demonstrating a new separation between quantum and classical cryptography. Not a bad day for the arXivs!

Snarky Mode On

Robert Laughlin, Nobel prize winner for the theory behind the fractional quantum Hall effect, has a new book out, “A Different Universe.” The book is interesting, but it also has its problems. As you might guess from previous posts I’ve made, Professor Laughlin has an amazing view of quantum computers:

There is a great deal of interest lately in the quantum computer, a fundamentally new kind of computational hardware that would exploit the entanglement of the quantum wave function to perform calculations presently impossible with conventional computers. The most important of these is the generation of enormous prime numbers and the quick factorization of other enormous numbers. The impossibility of factoring a number that is the product of two large primes in reasonable time with conventional computers is the basis of modern cryptography. However, quantum computation has a terrible Achilles heel that becomes clear when one confronts the problem of reading out the answer: the effects that distinguish quantum computers from conventional ones also cause quantum indeterminism. Quantum-mechanical wave functions do indeed evolve deterministically, but the process of turning them into signals people can read generates errors. Computers that make mistakes are not very useful, so the design issue in quantum computation that counts is overcoming mistakes by measurement. A textbook method for doing this is to place a million copies of the same experiment in a small box and measure something they do collectively, generating oscillating magnetic fields, for example, as occurs in a quantum computer built with electron spins. The damage inflicted by the measurement process then affects only a few copies, leaving the rest intact. This trick is so powerful that variations of it enable you to read out the entire wave function of any quantum computer, at least in principle. However, a logical implication is that you have created not a fabulous new kind of digital computer but a conventional analogue computer, a type of machine we do not use in the modern era because it is so easily disrupted by noise. Thus the frenzy over quantum computing misses the key point that the physical basis of computational reliability is emergent Newtonianness. One can imagine doing a computation without exploiting these principles, just as one can imagine proving by brute force that broken symmetry occurs, but a much more likely outcome is that eliminating computational mistakes will prove to be fundamentally impossible because its physical basis is absent. The view that this problem is trivial is a fantasy spun out of reductionist beliefs. Naturally, I hope I am wrong, and I wish those who invest in quantum computing the best of luck. I also ask that anyone anxious to invest in a bridge in lower Manhattan contact me right away, for there will be discounts for a limited time only.

Wow. Can I really read this and not respond? I just can’t resist. And especially I just can’t resist snarking. I should apologize for what I’m about to write, but my feeble mind just can’t take it. I just can’t take it anymore! So here are my suggestions for Laureate Laughlin:

1. Please read J. von Neumann’s “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components” (1956). The basis of computer reliability has absolutely nothing to do with “Newtonianness”. The basis of conventional computer reliability has to do with redundancy, and more physically with the thermodynamics of many condensed matter systems.
2. After you’ve mastered the most fundamental ideas of fault tolerance, it might be useful to understand the ideas behind error correction. Please read C. Shannon’s “A Mathematical Theory of Communication” (1948). Sure we are going backwards in time, but I think it’s important for you to realize that redundancy (“place a million copies”) is not the only way to encode information; see the sketch just after this list. Indeed this fact will become very important as we move on to step 3.
3. Now you’re ready for the big stuff. I know you know quantum theory like the back of your hand, so this step will be easier for you than for many others. Please read John Preskill’s “Fault-tolerant Quantum Computation.” See how the previous two ideas, when slightly modified for the quantum world, lead to a theory of fault-tolerant quantum computers. Isn’t that amazing? I consider it to be one of the most important results in physics in the last thirty years, but you’re older and wiser, so you may feel free to downgrade it. But please don’t desecrate what you haven’t had the time to understand. Quantum error correcting degrees of freedom are most definitely not the simplistic collective degrees of freedom which you insinuate (“oscillating magnetic fields”). The idea is more subtle, and thus, I believe, more beautiful. While beauty is in the eye of the beholder, you must surely admit that your solution to the fractional quantum Hall effect is only beautiful when you have the background to understand the theory, and so too quantum fault tolerance is beautiful, but only once you’ve sat down and understood the theory.
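And since I promised a sketch after the list: here is what encoding beyond brute repetition looks like, using the standard (7,4) Hamming code (textbook material, my own toy code). Four data bits, three parity bits, and any single bit flip is corrected: rather better than a million copies.

```python
# The (7,4) Hamming code: encode 4 data bits into 7, correct 1 bit flip.
import numpy as np

G = np.array([[1,0,0,0,1,1,0],    # generator matrix (systematic form)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],    # parity-check matrix: H @ c = 0 mod 2
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2

corrupted = codeword.copy()
corrupted[5] ^= 1                 # a single bit flip during "transmission"

syndrome = H @ corrupted % 2      # the nonzero syndrome locates the error
error_pos = next(j for j in range(7) if np.array_equal(H[:, j], syndrome))
corrupted[error_pos] ^= 1         # flip it back
print(np.array_equal(corrupted, codeword))  # True: error corrected
```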
Oh, and by the way, the generation of large prime numbers is easy, not hard, for conventional computers (but you did better than the string theorist Michio Kaku, whom I once saw on T.V. claiming that quantum computers were amazing because they could efficiently multiply large numbers).

Quantum Jobs

Todd Brun has put together a list of faculty openings and postdoctoral positions available in quantum information processing. See this page. On a related note, but not yet listed on Todd’s website, Australia continues its battle with Canada for the center of the quantum computing universe, with various job openings listed on Michael Nielsen’s blog here, here, and short visiting positions here.

QCSS05

The deadline (March 15, 2005) for the Summer School on Principles and Applications of Control in Quantum Systems, to be held at Caltech on August 7-14, 2005, is fast approaching. If you’ve ever wanted to learn how to apply control theory to quantum systems, this looks like an amazing opportunity. The potential agenda includes:

  1. Experiments and applications for control in quantum systems – phenomenology and motivations for control
    • Applications of optimal, relaxation-optimized, and ensemble control in magnetic resonance
    • Quantum feedback control in atomic systems: applications to precision measurement
    • Quantum control applications in quantum information science
    • Quantum dynamics of superconducting circuits and circuit quantum electrodynamics
    • Quantum measurement and feedback with nano-electromechanical systems
  2. Quantum-physical modeling
    • Quantum mechanics in the Schrödinger, Heisenberg and Interaction pictures
    • Perturbation theory and master equations
    • Quantum probability and filtering
  3. Control theory: from classical to quantum
    • State-space modeling; introduction to optimal and robust control
    • Geometric control: overview and highlights
    • Stochastic control: overview and highlights
    • Control-theoretic model reduction
  4. Frontiers in quantum control
    • Presentations on latest research by leading practitioners in the field

Optimality Feels Good

New paper! New paper! Let’s all do the new paper dance. Posted to the arXiv today: quant-ph/0503047. This paper is a revision of an earlier paper, where now it is shown that the protocol in the earlier paper is in fact optimal.
Optimal classical-communication-assisted local model of n-qubit Greenberger-Horne-Zeilinger correlations
Authors: Tracey E. Tessier, Carlton M. Caves, Ivan H. Deutsch, Dave Bacon, Bryan Eastin
Comments: This submission supersedes quant-ph/0407133. It is a substantially revised version of the previous document and now includes an optimality proof of our model

We present a model, motivated by the criterion of reality put forward by Einstein, Podolsky, and Rosen and supplemented by classical communication, which correctly reproduces the quantum-mechanical predictions for measurements of all products of Pauli operators on an n-qubit GHZ state (or "cat state"). The n-2 bits employed by our model is shown to be optimal for the allowed set of measurements, demonstrating that the required communication overhead scales linearly with n. We formulate a connection between the generation of the local values utilized by our model and the stabilizer formalism, which leads us to conjecture that a generalization of this method will shed light on the content of the Gottesman-Knill theorem.
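To give a flavor of the correlations such a model must reproduce, here are the standard three-qubit GHZ stabilizer facts (textbook material, nothing from the paper itself):

```latex
\[
  \lvert \mathrm{GHZ} \rangle
  = \tfrac{1}{\sqrt{2}}\bigl(\lvert 000\rangle + \lvert 111\rangle\bigr),
\qquad
  \langle X \otimes X \otimes X \rangle = +1,
\qquad
  \langle X \otimes Y \otimes Y \rangle
  = \langle Y \otimes X \otimes Y \rangle
  = \langle Y \otimes Y \otimes X \rangle = -1 .
\]
```

No assignment of fixed local values can satisfy all four constraints at once (this is Mermin’s version of the GHZ argument), which is exactly why a purely local model needs to be supplemented with classical communication.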