Jan 14 update at the end.
The three Pontiffs are reunited at QIP 2015 and, having forgotten how painful liveblogging was in the past, are doing it again. This time we will aim for some slightly more selective comments.
In an ideal world the QIP PC would have written these sorts of summaries and posted them on scirate, but instead they are posted on easychair where most of you can’t access them. Sorry about this! We will argue at the business meeting for a more open refereeing process.
The first plenary talk was:
Ran Raz (Weizmann Institute)
How to Delegate Computations: The Power of No-Signaling Proofs
TR13-183
Why is the set of no-signalling distributions worth looking at? (That is, the set of conditional probability distributions $latex p(a,b|x,y)$ that have well-defined marginals $latex p(a|x)$ and $latex p(b|y)$.) One way to think about it is as a relaxation of the set of “quantum” distributions, meaning the input-output distributions that are compatible with entangled states. The no-signalling polytope is defined by a polynomial number of linear constraints, and so is the sort of relaxation that is amenable to linear programming, whereas we don’t even know whether the quantum value of a game is computable. But is the no-signalling condition ever interesting in itself?
Raz and his coauthors (Yael Kalai and Ron Rothblum) prove a major result (which we’ll get to below) about the computational power of multi-prover proof systems where the provers have access to arbitrary non-signalling distributions. But they began by trying to prove an apparently unrelated classical crypto result. In general, multiple provers are stronger than one prover. Classically we have MIP=NEXP and IP=PSPACE, and in fact that MIP protocol just requires one round, whereas k rounds with a single prover is (roughly) within the k’th level of the polynomial hierarchy (i.e. even below PSPACE). So simulating many provers with one prover seems in general crazy.
But suppose instead the provers are computationally limited. Suppose they are strong enough for the problem to be interesting (i.e. they are much stronger than the verifier, so it is worthwhile for the verifier to delegate some nontrivial computation to them) but too weak to break some FHE (fully homomorphic encryption) scheme. This requires computational assumptions, but nothing too outlandish. Then the situation might be very different. If the verifier sends its queries using FHE, then one prover might simulate many provers without compromising security. This was the intuition of a paper from 2000, which Raz and coauthors are finally able to make rigorous. The catch is that even though the single prover can’t break the FHE, it can let its simulated provers play according to a no-signalling distribution. (Or at least this possibility cannot be ruled out.) So proving the security of 1-prover delegated computation requires not only the computational assumptions used for FHE, but also a multi-prover proof system that is secure against no-signalling distributions.
Via this route, Raz and coauthors found themselves in QIP territory. When they started it was known that
- MIPns[2 provers]=PSPACE [0908.2363]
- PSPACE $latex \subseteq$ MIPns[poly provers] $latex \subseteq$ EXP [0810.0693]
This work nails down the complexity of the many-prover setting, showing that EXP is contained in MIPns[poly provers], so that in fact the classes are equal.
It is a nice open question whether the same is true for a constant number of provers, say 3. By comparison, three entangled provers or two classical provers are strong enough to contain NEXP.
One beautiful consequence is that optimizing a linear function over the no-signalling polytope is roughly a P-complete problem. Previously it was known that linear programming was P-complete, meaning that it was unlikely to be solvable in, say, log space. But this work shows that this is true even if the constraints are fixed once and for all, and only the objective function is varied. (And we allow error.) This is established in a recent followup paper [ECCC TR14-170] by two of the same authors.
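To make the linear-programming picture concrete, here is a small sketch (our own illustration, not from the paper): computing the no-signalling value of the CHSH game by maximizing over the no-signalling polytope with an off-the-shelf LP solver. The answer comes out to 1, since a PR box wins CHSH with certainty.

```python
# Optimize a linear function (the CHSH winning probability) over the
# no-signalling polytope. Illustrative sketch only.
import numpy as np
from scipy.optimize import linprog

# Variables: p(a,b|x,y) for a,b,x,y in {0,1}, flattened to a 16-vector.
def idx(a, b, x, y):
    return ((a * 2 + b) * 2 + x) * 2 + y

A_eq, b_eq = [], []

# Normalization: sum_{a,b} p(a,b|x,y) = 1 for each input pair (x,y).
for x in range(2):
    for y in range(2):
        row = np.zeros(16)
        for a in range(2):
            for b in range(2):
                row[idx(a, b, x, y)] = 1
        A_eq.append(row); b_eq.append(1)

# No-signalling: Alice's marginal p(a|x) must not depend on y...
for a in range(2):
    for x in range(2):
        row = np.zeros(16)
        for b in range(2):
            row[idx(a, b, x, 0)] += 1
            row[idx(a, b, x, 1)] -= 1
        A_eq.append(row); b_eq.append(0)
# ...and Bob's marginal p(b|y) must not depend on x.
for b in range(2):
    for y in range(2):
        row = np.zeros(16)
        for a in range(2):
            row[idx(a, b, 0, y)] += 1
            row[idx(a, b, 1, y)] -= 1
        A_eq.append(row); b_eq.append(0)

# Objective: CHSH winning probability, won when a XOR b = x AND y.
c = np.zeros(16)
for a in range(2):
    for b in range(2):
        for x in range(2):
            for y in range(2):
                if (a ^ b) == (x & y):
                    c[idx(a, b, x, y)] = -0.25   # negated: linprog minimizes

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print(-res.fun)   # 1.0: no-signalling boxes win CHSH perfectly
```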
Francois Le Gall
Improved Quantum Algorithm for Triangle Finding via Combinatorial Arguments
abstract arXiv:1407.0085
A technical tour-de-force that we will not do justice to here. One intriguing barrier-breaking aspect of the work is that all previous algorithms for triangle finding worked equally well for the standard unweighted case as well as a weighted variant in which each edge is labeled by a number and the goal is to find a set of edges $latex (a,b), (b,c), (c,a)$ whose weights add up to a particular target. Indeed this algorithm has a query complexity for the unweighted case that is known to be impossible for the weighted version. A related point is that this shows the limitations of the otherwise versatile non-adaptive learning-graph method.
Ryan O’Donnell and John Wright
Quantum Spectrum Testing
abstract arXiv:1501.05028
A classic problem: given $latex \rho^{\otimes n}$ for $latex \rho$ an unknown d-dimensional state, estimate some property of $latex \rho$. One problem where the answer is still shockingly unknown is full tomography: output an estimate $latex \hat\rho$ achieving $latex \mathbb{E} \|\rho-\hat \rho\|_1 \leq\epsilon$.
Results from compressed sensing show that $latex n = \tilde\Theta(d^2r^2)$ for single-copy two-outcome measurements of rank-$latex r$ states with constant error, but if we allow block measurements then maybe we can do better. Perhaps $latex O(d^2/\epsilon)$ is possible using the Local Asymptotic Normality results of Guta and Kahn [0804.3876], as Hayashi has told me, but the details are – if we are feeling generous – still implicit. I hope that he, or somebody, works them out. (18 Jan update: thanks Ashley for fixing a bug in an earlier version of this.)
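As a toy version of the estimation task above (our own sketch, not from any of these papers), here is single-qubit tomography: measure each Pauli on a third of the copies, form the linear-inversion estimate, and watch the trace-distance error fall off like $latex 1/\sqrt{n}$.

```python
# Single-qubit tomography by Pauli averaging. Illustrative sketch only.
import numpy as np

paulis = [np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

rng = np.random.default_rng(0)
r = np.array([0.3, 0.2, 0.5])                    # Bloch vector of the true rho
rho = 0.5 * (np.eye(2) + sum(c * P for c, P in zip(r, paulis)))

for n in [300, 3000, 30000]:
    # measure each Pauli on n/3 copies; outcomes are +/-1 with mean <P>
    r_hat = np.array([np.mean(rng.choice([1, -1], size=n // 3,
                                         p=[(1 + c) / 2, (1 - c) / 2]))
                      for c in r])
    rho_hat = 0.5 * (np.eye(2) + sum(c * P for c, P in zip(r_hat, paulis)))
    err = 0.5 * np.abs(np.linalg.eigvalsh(rho - rho_hat)).sum()  # trace dist.
    print(n, err)                                # shrinks like 1/sqrt(n)
```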
The current talk focuses instead on properties of the spectrum, e.g. how many copies are needed to distinguish a maximally mixed state of rank $latex r$ from one of rank $latex r+c$? The symmetry of the problem (invariant under both permutations and rotations of the form $latex U^{\otimes n}$) means that we can WLOG consider “weak Schur sampling” meaning that we measure which $latex S_n \times U_d$ irrep our state lies in, and output some function of this result. This irrep is described by an integer partition which, when normalized, is a sort of mangled estimate of the spectrum. It remains only to analyze the accuracy of this estimator in various ways. In many of the interesting cases we can say something nontrivial even if $latex n= o(d^2)$. This involves some delicate calculations using a lot of symmetric polynomials. Some of these first steps (including many of the canonical ones worked out much earlier by people like Werner) are in my paper quant-ph/0609110 with Childs and Wocjan. But the current work goes far far beyond our old paper and introduces many new tools.
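For concreteness, here is a classical simulation of weak Schur sampling (our own sketch, using the standard fact that the distribution of the irrep label for $latex \rho^{\otimes n}$ equals the distribution of the RSK shape of an i.i.d. word drawn from the spectrum of $latex \rho$): the normalized partition $latex \lambda/n$ tracks the sorted spectrum.

```python
# Simulate weak Schur sampling via the RSK correspondence. Sketch only.
import bisect
import random

def rsk_shape(word):
    """Shape of the RSK insertion tableau of `word`, via row bumping."""
    rows = []                                 # rows of the insertion tableau
    for x in word:
        for row in rows:
            j = bisect.bisect_right(row, x)   # leftmost entry strictly > x
            if j == len(row):
                row.append(x)
                break
            x, row[j] = row[j], x             # bump it to the next row
        else:
            rows.append([x])                  # bumped off the bottom: new row
    return [len(row) for row in rows]

spectrum = [0.5, 0.3, 0.2]                    # spectrum of a 3-dimensional rho
n = 10000
word = random.choices(range(len(spectrum)), weights=spectrum, k=n)
print([l / n for l in rsk_shape(word)])       # close to [0.5, 0.3, 0.2]
```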
Han-Hsuan Lin and Cedric Yen-Yu Lin. Upper bounds on quantum query complexity inspired by the Elitzur-Vaidman bomb tester
abstract arXiv:1410.0932
This talk considers a new model of query complexity inspired by the Elitzur-Vaidman bomb tester. The bomb tester is a classic demonstration of quantum weirdness: You have a collection of bombs whose detonation devices are so sensitive that even a single photon impacting them will set them off. Some of these bombs are live and some are duds, and you’d like to know which is which. Classically, you don’t stand a chance, but quantum mechanically, you can put a photon into a beamsplitter and place the bomb in one arm of a Mach-Zehnder interferometer. A dud leaves the interference intact, so the photon always exits the same port. A live bomb acts like a which-path measurement and destroys the interference: half the time it explodes, but a quarter of the time the photon exits the other port, certifying a live bomb without setting it off. There are various tricks that you can play related to the quantum Zeno effect that push the probability of safely detecting a live bomb arbitrarily close to 1.
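Here is a minimal state-vector sketch of the Zeno-style protocol (our own illustration, not from the paper): a qubit standing in for the photon is rotated slightly toward the “bomb arm” each round. A live bomb measures it every round, pinning it near $latex |0\rangle$ and exploding only with probability $latex O(1/N)$; a dud lets it rotate all the way to $latex |1\rangle$.

```python
# Zeno-style bomb tester, toy qubit version (illustrative sketch only).
import numpy as np

def bomb_test(live, N, rng):
    theta = np.pi / (2 * N)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])      # small rotation toward the bomb arm
    psi = np.array([1.0, 0.0])           # start in |0>: photon not in bomb arm
    for _ in range(N):
        psi = R @ psi
        if live:
            # a live bomb measures the arm; outcome 1 means explosion
            if rng.random() < abs(psi[1]) ** 2:
                return "exploded"
            psi = np.array([1.0, 0.0])   # otherwise collapse back to |0>
    # final measurement: still |0> means a live bomb blocked the rotation
    return "live" if abs(psi[0]) ** 2 > 0.5 else "dud"

rng = np.random.default_rng(0)
N = 200
print(bomb_test(live=False, N=N, rng=rng))   # "dud"
runs = [bomb_test(live=True, N=N, rng=rng) for _ in range(1000)]
print(runs.count("exploded") / 1000)         # ~ pi^2/(4N) ~ 0.012
```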
The authors define a model of query complexity where one risks explosion for some events, and they showed that the quantum query complexity is related to the bomb query complexity by $latex B(f) = \Theta(Q(f)^2)$. There were several other interesting results in this talk, but we ran out of steam as it was the last talk before lunch.
Kirsten Eisentraeger, Sean Hallgren, Alexei Kitaev and Fang Song
A quantum algorithm for computing the unit group of an arbitrary degree number field
STOC 2014
One unfortunate weakness of this work: The authors, although apparently knowledgeable about Galois theory, don’t seem to know about this link.
The unit group is a fundamental object in algebraic number theory. It comes up frequently in applications as well, and is used for fully homomorphic encryption, code obfuscation, and many other things.
My [Steve] personal way of understanding the unit group of a number field is that it is a sort of gauge group with respect to the factoring problem. The units in a ring are those numbers with multiplicative inverses. In the ring of integers, where the units are just $latex \pm 1$, we can factor composite numbers as $latex 6 = 3 \times 2 = (-3)\times(-2)$. Both of these are equally valid factorizations; they are equivalent modulo units. In more complicated settings where unique factorization fails, we have factorization into prime ideals, and the group of units can in general become infinite (though always discrete).
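To see an infinite unit group in the smallest interesting example (our own illustration, nothing to do with the algorithm in the talk): in $latex \mathbb{Z}[\sqrt{2}]$ the element $latex a + b\sqrt{2}$ is a unit exactly when its norm $latex a^2 - 2b^2$ is $latex \pm 1$, i.e. when $latex (a,b)$ solves Pell’s equation, and the units are exactly $latex \pm(1+\sqrt{2})^k$.

```python
# Enumerate small units of Z[sqrt(2)] by brute force over Pell's equation.
units = [(a, b)
         for a in range(-20, 21)
         for b in range(-20, 21)
         if a * a - 2 * b * b in (1, -1)]
print(units)   # includes (1, 1) ~ 1+sqrt(2) and (3, 2) ~ (1+sqrt(2))^2
```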
The main result of this talk is a quantum algorithm for finding the unit group of a number field of arbitrary degree. One of the technical problems that they had to solve to get this result was to solve the hidden subgroup problem on a continuous group, namely $latex \mathbb{R}^n$.
The speaker also announced some work in progress: a quantum algorithm for the principal ideal problem and the class group problem in arbitrary degree number fields [Biasse Song ‘14]. It sounds like not all the details of this are finished yet.
Dominic Berry, Andrew Childs and Robin Kothari
Hamiltonian simulation with nearly optimal dependence on all parameters
abstract 1501.01715
Hamiltonian simulation is not only the original killer app of quantum computers, but also a key subroutine in a large and growing number of problems. I remember thinking it was pretty slick that higher-order Trotter-Suzuki could achieve a run-time of $latex \|H\| t\, \text{poly}(s) (\|H\| t/\epsilon)^{o(1)}$ where $latex t$ is the time we simulate the Hamiltonian for and $latex s$ is the sparsity. I also remember believing that the known optimality theorems for Trotter-Suzuki (sorry I can’t find the reference, but it involves decomposing $latex e^{t(A+B)}$ for the free Lie algebra generated by $latex A,B$) meant that this was essentially optimal.
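As a reminder of what the baseline looks like, here is a toy numerical check (ours, not from the paper) of first-order Trotter for $latex H = A + B$: the error of $latex (e^{-iAt/r}e^{-iBt/r})^r$ shrinks like $latex 1/r$, and higher-order Trotter-Suzuki improves the power of $latex 1/r$ but never reaches the $latex \text{poly}\log(1/\epsilon)$ scaling achieved below.

```python
# First-order Trotter error scaling for a random H = A + B. Sketch only.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)); A = (A + A.T) / 2   # random Hermitian terms
B = rng.standard_normal((8, 8)); B = (B + B.T) / 2
t = 1.0
exact = expm(-1j * t * (A + B))
for r in [1, 10, 100]:
    step = expm(-1j * t / r * A) @ expm(-1j * t / r * B)
    error = np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2)
    print(r, error)                                  # error falls like 1/r
```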
Fortunately, Berry, Childs and Kothari (and in other work, Cleve) weren’t so pessimistic, and have blasted past this implicit barrier. This work synthesizes everything that came before to achieve a run-time of $latex \tau\, \text{poly}\log(\tau/\epsilon)$, where $latex \tau = \|H\|_{\max}st$ and $latex \|H\|_{\max} = \max_{i,j} |H_{i,j}|$, which can be related to the earlier bounds via $latex \|H\| \leq s \|H\|_{\max}$.
One quote I liked: “but this is just a generating function for Bessel functions!” Miraculously, Dominic makes that sound encouraging. The lesson I suppose is to find an important problem (like Hamiltonian simulation) and to approach it with courage.
Salman Beigi and Amin Gohari
Wiring of No-Signaling Boxes Expands the Hypercontractivity Ribbon
abstract arXiv:1409.3665
If you have some salt water with salt concentration 0.1% and some more with concentration 0.2%, then anything in the range [0.1, 0.2] is possible, but no amount of mixing will give you even a single drop with concentration 0.05% or 0.3%, even if you start with oceans at the initial concentrations. Similarly, if Alice and Bob share an unlimited number of locally unbiased random bits with correlation $latex \eta$, they cannot produce even a single bit with correlation $latex \eta' > \eta$ if they don’t communicate. This was famously proved by Reingold, Vadhan and Wigderson.
This talk does the same thing for no-signaling boxes. Let’s just think about noisy PR boxes to make this concrete. The exciting thing about this work is that it doesn’t just prove a no-distillation theorem but it defines an innovative new framework for doing so. The desired result feels like something from information theory, in that there is a monotonicity argument, but it needs to use quantities that do not increase with tensor product.
Here is one such quantity. Define the classical correlation measure $latex \rho(A,B) = \max \text{Cov}(f,g)$ where $latex f:A\to \mathbb{R}$, $latex g:B\to \mathbb{R}$ each have variance 1. Properties:
- $latex 0 \leq \rho(A,B) \leq 1$
- $latex \rho(A,B) =0$ iff $latex p_{AB} = p_A \cdot p_B$
- $latex \rho(A^n, B^n) = \rho(A,B)$
- for any no-signaling box, $latex \rho(A,B) \leq \max(\rho(A,B|X,Y), \rho(X,Y))$
Together this shows that any wiring of boxes cannot increase this quantity.
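A quick numerical sanity check of these properties (our own sketch): $latex \rho(A,B)$ is the second-largest singular value of the matrix $latex Q_{ab} = p(a,b)/\sqrt{p(a)p(b)}$ (the Hirschfeld-Gebelein-Renyi characterization of maximal correlation), so for unbiased bits with correlation $latex \eta$ it comes out to exactly $latex \eta$.

```python
# Maximal correlation of a joint distribution via SVD. Sketch only.
import numpy as np

def maximal_correlation(P):
    pa, pb = P.sum(axis=1), P.sum(axis=0)
    Q = P / np.sqrt(np.outer(pa, pb))
    return np.linalg.svd(Q, compute_uv=False)[1]   # top singular value is 1

eta = 0.3
P = np.array([[1 + eta, 1 - eta],
              [1 - eta, 1 + eta]]) / 4             # unbiased bits, corr. eta
print(maximal_correlation(P))                      # 0.3
```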
The proof of this involves a more sophisticated correlation measure that is not just a single number but is a region called the hypercontractivity ribbon (originally due to [Ahlswede, Gacs ‘76]). This is defined to be the set of $latex (\lambda_1, \lambda_2)$ such that for any $latex f,g$ we have
$latex \mathbb{E}[f_A g_B] \leq \|f_A\|_{\frac{1}{\lambda_1}} \|g_B\|_{\frac{1}{\lambda_2}}$
A remarkable result of [Nair ‘14] is that this is equivalent to the condition that
$latex I(U;AB) \geq \lambda_1 I(U;A) + \lambda_2 I(U;B)$
for any extension of the distribution on AB to one on ABU.
Some properties:
- The ribbon is $latex [0,1]\times [0,1]$ iff A,B are independent.
- It is stable under tensor power.
- monotonicity: local operations on A,B enlarge $latex R$
For boxes define $latex R(A,B|X,Y) = \cap_{x,y} R(A,B|x,y)$. The main theorem is then that rewiring never shrinks the hypercontractivity ribbon. And as a result, PR box noise cannot be reduced.
These techniques are beautiful and seem as though they should have further application.
Masahito Hayashi
Estimation of group action with energy constraint
abstract arXiv:1209.3463
Your humble bloggers were at this point also facing an energy constraint which limited our ability to estimate what happened. The setting is that you pick a state, nature applies a unitary (specifically from a group representation) and then you pick a measurement and try to minimize the expected error in estimating the group element corresponding to what nature did. The upshot is that entanglement seems to give a quadratic improvement in metrology. Noise (generally) destroys this. This talk showed that a natural energy constraint on the input also destroys this. One interesting question from Andreas Winter was about what happens when energy constraints are applied also to the measurement, along the lines of 1211.2101 by Navascues and Popescu.
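To see the quadratic improvement in question, here is a toy Fisher-information calculation (ours, and it ignores the energy constraint that is the point of the talk): $latex N$ unentangled probes give Fisher information about $latex N$ (the standard quantum limit), while a single entangled NOON-type probe of the same size gives about $latex N^2$ (the Heisenberg limit).

```python
# Fisher information for estimating a phase theta from a +/-1 measurement
# with outcome probability p(theta) = (1 + cos(k*theta))/2. Sketch only.
import numpy as np

def fisher(k, theta, h=1e-6):
    p = lambda t: (1 + np.cos(k * t)) / 2
    dp = (p(theta + h) - p(theta - h)) / (2 * h)   # numerical derivative
    return dp ** 2 / (p(theta) * (1 - p(theta)))

N, theta = 10, 0.7
print(N * fisher(1, theta))   # ~ N   : N separate probes (standard limit)
print(fisher(N, theta))       # ~ N^2 : one NOON-type probe (Heisenberg)
```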
Jan 14 update: forgot one! Sorry Ashley.
Ashley Montanaro
Quantum pattern matching fast on average
abstract arXiv:1408.1816
Continuing the theme of producing shocking and sometimes superpolynomial speedups to average-case problems, Ashley shows that finding a random pattern of length $latex m$ in a random text of length $latex n$ can be done in quantum time $latex \tilde O(\sqrt{n/m}\exp(\sqrt{\log m}))$. Here “random” means something subtle. The text is uniformly random and the pattern is either uniformly random (in the “no” case) or is a random substring of the text (in the “yes” case). There is also a higher-dimensional generalization of the result.
One exciting thing about this is that it is a fairly natural application of Kuperberg’s algorithm for the dihedral-group HSP; in fact the first such application, although Kuperberg’s original paper does mention a much less natural such variant. (correction: not really the first – see Andrew’s comment below.)
It is interesting to think about this result in the context of the general question about quantum speedups for promise problems. It has long been known that query complexity cannot be improved by more than a polynomial (perhaps quadratic) factor for total functions. The dramatic speedups for things like the HSP, welded trees and even more contrived problems must then use the fact that they work for partial functions, and indeed even “structured” functions. Pattern matching is of course a total function, but not one that will ever be hard on average over a distribution with, say, i.i.d. inputs. Unless the pattern is somehow planted in the text, most distributions simply fail to match with overwhelming probability. It is funny that for i.i.d. bit strings this stops being true when $latex m = O(\log n)$, which is almost exactly when Ashley’s speedup becomes merely quadratic. So pattern matching is a total function whose hard distributions all look “partial” in some way, at least when quantum speedups are possible. This is somewhat vague, and it may be that some paper out there expresses the idea more clearly.
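A tiny experiment (ours) backing up that last remark: the chance that a uniformly random length-$latex m$ pattern occurs somewhere in a random length-$latex n$ text is at most about $latex n 2^{-m}$, so accidental matches die out once $latex m$ exceeds roughly $latex \log_2 n$.

```python
# Probability that a random pattern appears in a random text. Sketch only.
import random

def match_prob(n, m, trials=1000):
    hits = 0
    for _ in range(trials):
        text = ''.join(random.choice('01') for _ in range(n))
        pattern = ''.join(random.choice('01') for _ in range(m))
        hits += pattern in text
    return hits / trials

n = 1024
for m in [4, 8, 10, 12, 16]:
    print(m, match_prob(n, m))   # drops sharply around m ~ log2(n) = 10
```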
Part of the strength of this paper is then finding a problem where the promise is so natural. It gives me new hope for the future relevance of things like the HSP.
For young researchers especially, these works act to sustain the traditional wonder of Quantum-Camelot
And these works help too, to heal the seams that — as the poet says — “this way and that divide our swift minds”
Good on 'yah, Quantum Pontiffs!
Hmmmm … the first, and most important, sentence was dropped from the above post: “Thank you for this outstanding live-blogging, which is a lot of work, as everyone appreciates.” (and the gratitude expressed is entirely sincere).
Thanks, I was hoping someone would liveblog. Keep ’em coming!!!
Ashley’s result is great, but it’s not the first application of Kuperberg’s algorithm — our quantum algorithm for elliptic curve isogenies (http://arxiv.org/abs/1012.4019) also uses it.
thanks! post updated.
Thanks for live-blogging!
The reference you’re looking for regarding the t^{1+o(1)} optimality theorem for the case of Trotter-Suzuki based quantum simulation is the following Communications in Mathematical Physics paper by Berry et al. http://arxiv.org/abs/quant-ph/0508139. The authors later published a slightly improved analysis in this book chapter. It’s exciting to see this barrier blasted!