Warning for typical QIP attendees —
This first talk may have some implications for the real world 🙂
Andrea Mari, Vittorio Giovannetti, Alexander S. Holevo, R. Garcia-Patron and N. J. Cerf.
Majorization and entropy at the output of bosonic Gaussian channels.
Previous Title: Quantum state majorization at the output of bosonic Gaussian channels (Plenary Talk)
abstract arXiv:1312.3545
The solution of the Gaussian optimizer conjecture by Giovannetti, Holevo, and Garcia-Patron has led to enormous progress on a web of interconnected conjectures about Gaussian channels. This includes the phase space majorization conjecture, the additivity and minimum output Renyi entropy conjectures, strong-converse theorems for the classical capacity of Gaussian channels, the entropy-power inequality conjecture, and more! The only remaining open questions are the entropy-photon number inequality and the quantum capacity.
The proof uses a decomposition of Gaussian channels. Every phase-insensitive Gaussian channel is equivalent to a quantum-limited attenuator followed by a quantum-limited amplifier. Every phase-contravariant Gaussian channel is equivalent to a quantum-limited attenuator followed by a quantum-limited amplifier followed by a noiseless phase conjugation.
It also uses an old result about quantum beam splitters. How can the output of a beamsplitter be distinguished from two independent broadcasts of the same source? Answer: only if the broadcast was a coherent state. Therefore we have the “golden property” of a quantum beam splitter: the only input states producing pure output states for a quantum limited attenuator are coherent states.
The main result of this talk is a proof of the majorization conjecture. This addresses the question: what is the minimum noise or disorder achievable at the output state, optimizing over all possible input states? The output from a coherent input state majorizes all other output states. Namely, if $latex \Phi$ is a Gaussian channel, then
$latex \Phi(\lvert\alpha\rangle\!\langle\alpha\rvert) \succ \Phi(\rho)$ for all $latex \rho, \lvert\alpha\rangle$.
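Here $latex \succ$ denotes majorization of the spectra (the standard definition, not specific to this talk): writing $latex \lambda_i^\downarrow(\rho)$ for the eigenvalues of $latex \rho$ in decreasing order,

$latex \rho \succ \sigma \iff \sum_{i=1}^k \lambda_i^\downarrow(\rho) \geq \sum_{i=1}^k \lambda_i^\downarrow(\sigma) \mbox{ for all } k,$

so the theorem says the coherent-state output is, in this precise sense, the least disordered output possible.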
The proof uses the decomposition results mentioned above, concavity of entropy, and the breakthrough results of Giovannetti, Holevo, and Garcia-Patron. A similar result holds for any Renyi $latex p$ entropy for $latex p > 1$. There is also a phase space majorization result.
Here is an awesome summary of what this research project has accomplished.
Runyao Duan and Andreas Winter. No-Signalling Assisted Zero-Error Capacity of Quantum Channels and an Information Theoretic Interpretation of the Lovasz Number. Previous title: Zero-Error Classical Channel Capacity and Simulation Cost Assisted by Quantum No-Signalling Correlations
abstract arXiv:1409.3426
Coming the morning after the conference dinner, Andreas begins with a “hangover summary” of the main results:
- $latex C_0(G) \le \log \vartheta(G)$ (sometimes known to be strict)
- $latex C_{0,E}(G) \leq \log \vartheta(G)$
- $latex C_{0,NS}(G) = \log \vartheta(G)$
On the LHS we have the asymptotic zero-error capacity of a channel, assisted either by nothing ($latex C_0$), entanglement ($latex C_{0,E}$) or no-signalling correlations ($latex C_{0,NS}$). On the RHS we have the Lovasz theta number (see below) which can be calculated efficiently using semidefinite programming.
Classical zero-error capacity depends only on the transition graph, not the probabilities. A useful notion is that of the confusability graph, an undirected graph on the input symbols of the channel where two symbols are adjacent if they are confusable, i.e. if there is a nonzero probability of confusing them. This encodes the combinatorics of the channel in a natural way. For example, product channels have a product confusability graph determined by those of the factor channels. (More precisely, $latex I + A(N\times N') = (I+A(N)) \otimes (I + A(N'))$.)
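Since this product formula is easy to sanity-check numerically, here is a minimal Python sketch (my own illustration, not from the talk; all function names are made up) that builds a confusability graph from a 0/1 transition graph and verifies the identity on random channels:

```python
import numpy as np

def confusability(T):
    """Adjacency matrix of the confusability graph: inputs x, x' are
    confusable iff some output y is reachable from both, i.e. the
    supports of columns x and x' of the transition graph T overlap."""
    A = ((T.T @ T) > 0).astype(int)
    np.fill_diagonal(A, 0)          # no self-loops
    return A

def random_transition_graph(rows, cols, rng):
    """Random 0/1 transition graph in which every input reaches some output."""
    while True:
        T = (rng.random((rows, cols)) < 0.5).astype(int)
        if T.sum(axis=0).min() > 0:
            return T

rng = np.random.default_rng(0)
N, Np = random_transition_graph(4, 3, rng), random_transition_graph(5, 4, rng)

# The product channel's transition graph is the Kronecker product of the factors'.
lhs = np.eye(3 * 4) + confusability(np.kron(N, Np))
rhs = np.kron(np.eye(3) + confusability(N), np.eye(4) + confusability(Np))
assert np.array_equal(lhs, rhs)
print("I + A(N x N') == (I + A(N)) (x) (I + A(N')) checked")
```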
The largest codebook for a single use of the channel is given by the independence number of the confusability graph; this gives the one-shot zero-error capacity. It's unfortunately NP-complete to compute this property of a graph, and even approximating it is NP-hard. However, a seminal result due to Lovasz gives a convex relaxation of this into a semidefinite program, the Lovasz theta number:
$latex \alpha(G) \leq \vartheta(G) = \max \mathrm{Tr}(BJ) \mbox{ s.t. } B\succeq 0,\ \mathrm{Tr} B=1,\ B_{xy} = 0 \mbox{ for all edges } xy \mbox{ of } G$. (Here $latex J$ is the all-ones matrix.)
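To make the SDP concrete, here is a small sketch in Python (my own illustration, not code from the talk), assuming cvxpy with an SDP-capable solver such as SCS is installed:

```python
import numpy as np
import cvxpy as cp

def lovasz_theta(n, edges):
    """Lovasz theta via the SDP above: max Tr(BJ) over PSD B with
    Tr B = 1 and B_xy = 0 for every edge (confusable pair) of G."""
    B = cp.Variable((n, n), symmetric=True)
    constraints = [B >> 0, cp.trace(B) == 1]
    constraints += [B[x, y] == 0 for (x, y) in edges]
    # Tr(BJ), with J the all-ones matrix, is just the sum of all entries.
    return cp.Problem(cp.Maximize(cp.sum(B)), constraints).solve()

pentagon = [(i, (i + 1) % 5) for i in range(5)]
print(lovasz_theta(5, pentagon), np.sqrt(5))   # both ~2.236
```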
Another useful bound is the so-called fractional packing number, obtained by writing down the natural integer program for the independence number and replacing the $latex w_x\in \{0,1\}$ constraint with $latex 0\leq w_x \leq 1$; the relaxation can only increase the optimum, so
$latex \alpha(G) \leq \alpha^*(\Gamma) = \max \sum_x w_x \mbox{ s.t. } w_x \geq 0, \mbox{ and } \forall y\ \sum_x \Gamma(y|x) w_x \leq 1$.
Minimizing over $latex \Gamma$ consistent with $latex G$ gives $latex \alpha^*(G) = \min_{\Gamma\sim G} \alpha^*(\Gamma)$.
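The fractional packing LP is just as easy to play with. A toy sketch using scipy (mine, not the authors'), computing $latex \alpha^*(\Gamma)$ for the transition graph of the typewriter channel discussed below, which already attains $latex \alpha^*(G) = 5/2$:

```python
import numpy as np
from scipy.optimize import linprog

# Transition graph of the typewriter channel: input x can be received as
# y = x or y = x+1 (mod 5). Rows are outputs y, columns are inputs x.
Gamma = np.zeros((5, 5))
for x in range(5):
    Gamma[x, x] = Gamma[(x + 1) % 5, x] = 1

# max sum_x w_x  s.t.  w >= 0  and, for each y, sum_x Gamma[y,x] w_x <= 1.
# linprog minimizes, so we negate the objective.
res = linprog(c=-np.ones(5), A_ub=Gamma, b_ub=np.ones(5), bounds=(0, None))
print(-res.fun)   # 2.5
```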
Crucially, both the Lovasz theta number and the fractional packing number are multiplicative, and therefore they bound not only the independence number, but also the capacity. Andreas said something surprising: if $latex f(G)$ is multiplicative and satisfies $latex \alpha(G) \leq f(G) \leq \vartheta(G)$, then we must have $latex f(G) = \vartheta(G)$! This is still unpublished, but follows from the beautiful formula [personal communication from Winter, via my memory, so hopefully not too wrong]:
$latex \vartheta(G) = \sup_H \frac{\alpha(G\times H)}{\vartheta(H)}$.
Compare with a similar formula for the fractional packing number:
$latex \alpha^*(G) = \sup_H \frac{\alpha(G\times H)}{\alpha(H)}$.
An example, in fact the usual example, is the “typewriter graph”. This has an input $latex x\in\mathbb{Z}_5$ and output $latex y \in \{x,x+1\}$. Then $latex \alpha(G) = 2$, $latex \alpha(G\times G) =5 > \alpha(G)^2$, $latex \vartheta(G) = \sqrt{5}$ (thus giving a tight upper bound on the capacity) and $latex \alpha^*(G) = 5/2$.
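These numbers are small enough to check by brute force. A toy Python verification (mine), which builds the product confusability graph via the tensor-product formula above and recovers Shannon's famous $latex \alpha(G\times G) = 5$ (the size-6 search takes a few seconds):

```python
import numpy as np
from itertools import combinations

def alpha(A, upto):
    """Independence number by brute force (fine for tiny graphs)."""
    best = 0
    for k in range(1, upto + 1):
        if any(all(A[i][j] == 0 for i, j in combinations(S, 2))
               for S in combinations(range(len(A)), k)):
            best = k          # an independent set of size k exists
        else:
            break             # none of size k, hence none larger
    return best

A = np.zeros((5, 5), dtype=int)        # pentagon: x confusable with x+1 mod 5
for x in range(5):
    A[x][(x + 1) % 5] = A[(x + 1) % 5][x] = 1

# Product confusability graph, using I + A(GxG) = (I+A) (x) (I+A).
A2 = np.kron(np.eye(5) + A, np.eye(5) + A) - np.eye(25)

print(alpha(A, 5))    # 2
print(alpha(A2, 6))   # 5 > alpha(G)^2 = 4
```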
We would like to close the gap provided by the above relaxations by allowing additional resources in the encoding and decoding. Shannon did this in his 1956 paper by allowing feedback, and we can also add entanglement and no-signaling correlations.
The Lovasz theta number is in fact an upper bound to the entanglement-assisted zero-error capacity, which sometimes is larger than the unassisted zero-error capacity. This gives an operational (but partial) explanation of why the Lovasz theta function is not always a tight upper bound.
On the other hand, the fractional packing number is equal to the non-signalling-assisted capacity, which is an awesome and nontrivial result.
Now what about the quantum case? Here things get even cooler, although not everything is as well understood. For quantum channels, the quantum generalizations of the transition and confusability graphs are given by the following correspondences:
$latex K = \text{span}\{E_i\}$ and $latex S = K^\dag K = \text{span} \{E_i^\dag E_j\}$, where the $latex E_i$ are the Kraus operators of the channel.
The authors have also generalized (as in Will Matthews’ talk earlier) the notion of no-signalling operations to ones that take quantum inputs and outputs. Just as the classical no-signalling polytope is a small linear program, this quantum set has a nice SDP characterization.
Now consider $latex \Upsilon(K) = \max \sum_x s_x \mbox{ s.t. } 0 \leq R_x \leq s_x \Pi_x^\perp,\ \sum_x (R_x + s_x \Pi_x) = I$. This is
$latex \leq A(K) = \max \sum_x s_x \mbox{ s.t. } 0 \leq s_x, \sum_x s_x \Pi_x \leq I$.
Then the “most amazing thing”, according to Andreas, is the following theorem:
$latex C_{0,NS}(K) = \lim_{n\to\infty} \frac{1}{n} \log \Upsilon(K^{\otimes n}) = \log A(K)$.
Note that this is not just an inequality, but an actual equality!
In fact, they show that $latex \Upsilon(K^{\otimes n}) \ge n^{-c} A(K)^n$.
The proof is reminiscent of the recent and inexplicably not-invited Fawzi-Renner breakthrough on the quantum conditional mutual information.
Now in terms of $latex G$, instead of $latex K$: $latex \min_{K\sim G} A(K) = \vartheta(G)$, which is the first information-theoretic application of the Lovasz theta function.
Some closing remarks and open questions:
- SDP formulas for assisted capacity and simulation cost (one shot)
- SDPs can regularize to relaxed SDPs!
- Capacity interpretation of Lovasz theta number
- Is there a gap between $latex C_{0,E}(G)$ and $latex \log\vartheta(G)$?
- Is regularization necessary?
Mario Berta, Omar Fawzi and Volkher Scholz.
Quantum-proof randomness extractors via operator space theory
abstract arXiv:1409.3563
This talk is about producing high-quality (i.e. nearly uniform) randomness from [necessarily longer] strings that contain “lower-quality” randomness. Tools for doing this are called extractors; other closely related tools are condensers and expanders. Why would we need to do this? Well suppose we start with some uniform randomness and then leak some information about it to an adversary (Eve). Then Eve’s subjective view of our randomness is no longer uniform. Applying an extractor can make it again close to uniform. Even in a non-cryptographic setting they can be useful. Suppose we run a randomized algorithm using some large uniformly random seed. Conditioned on the output of this algorithm, the seed will no longer be uniformly random. If we want to continue using it, maybe even for later steps of a larger algorithm, we will need to clean it up using something like an extractor.
This establishes the need for extractors and their crucial property: their goal is to make a string random conditioned on some other string (e.g. Eve’s knowledge, or the earlier outputs of the randomized algorithm; call this the “side information”). This talk will consider the case when side information is in fact quantum. Conditioning on quantum things is always tricky, and one hint that the math here will be nontrivial is the fact that “operator space theory” is in the title.
One unpleasant fact about extractors is that they cannot be deterministic (a useful exercise for the reader). So they usually use a small additional seed, guaranteed to be uniform and independent of the source. The smallest possible seed size is logarithmic in the input size. Also, the number of output bits can be (more or less) no larger than the min-entropy of the source. So the key parameters are the input size, the input min-entropy, the seed size, the number of output bits, and the output's trace distance from being uniform and independent of the side information.
One example of a nice classical result is the leftover hash lemma.
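Roughly, the leftover hash lemma says that applying a function drawn from a 2-universal family to a source with min-entropy $latex k$ yields about $latex k - 2\log(1/\epsilon)$ output bits that are $latex \epsilon$-close to uniform, even given the seed. A standard 2-universal family is the set of binary Toeplitz matrices, with the seed specifying the diagonals; here is a minimal sketch (my own illustration, not from the talk):

```python
import numpy as np

def toeplitz_extract(source_bits, seed_bits, m):
    """Hash n source bits down to m bits by multiplying with the m x n
    binary Toeplitz matrix whose diagonals are given by the seed.
    Toeplitz matrices form a 2-universal family, so the leftover hash
    lemma applies."""
    n = len(source_bits)
    assert len(seed_bits) == n + m - 1
    # Entry (i, j) of a Toeplitz matrix depends only on i - j.
    T = np.array([[seed_bits[i - j + n - 1] for j in range(n)]
                  for i in range(m)])
    return T @ np.asarray(source_bits) % 2

rng = np.random.default_rng(1)
n, m = 16, 4
print(toeplitz_extract(rng.integers(0, 2, n), rng.integers(0, 2, n + m - 1), m))
```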
Can we still get the same result if the adversary is quantum? This has implications for privacy amplification in quantum cryptography and also (not sure why about this one) for properties of quantum memory.
Here the min-entropy becomes the conditional min-entropy, which involves a maximization over all guessing strategies and measures the knowledge of an adversary with access to a quantum system correlated with the source.
The main result here is a mathematical framework to study this, based on operator space theory. Why operator space? We can define C(Ext, k) to be the maximum advantage for a classical adversary against the extractor Ext on sources with k bits of min-entropy. The key insight is that this is a norm, specifically an operator norm. Sources with $latex \geq k$ bits of min-entropy correspond to probability distributions (i.e. $latex \|x\|_1 \leq 1$) with all entries $latex \leq 2^{-k}$ (i.e. $latex \|x\|_\infty \leq 2^{-k}$). The intersection of these two constraints defines a centrally symmetric convex set (if we subtract off the uniform distribution, I suppose), which can then define a norm. One other example of such a hybrid norm is the family of Sobolev norms. Now the achievable bias is the maximum 1-norm of $latex \mathrm{Ext}(x)$ as $latex x$ ranges over this set, where we think of the extractor Ext as a linear map. So this is an operator norm: the norm of Ext is the max of $latex \|\mathrm{Ext}(x)\|_1$ divided by the norm of $latex x$, where the numerator is measured in the $latex l_1$ norm and the denominator in this funny hybrid norm.
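For tiny parameters this picture can be brute-forced. By convexity it suffices to optimize over flat sources (uniform on sets of size $latex 2^k$), and ignoring side information the advantage is just the trace distance of (seed, output) from uniform. A toy sketch (mine, with a made-up 1-bit extractor):

```python
import numpy as np
from itertools import combinations, product

def advantage(ext, n, d, m, k):
    """Max trace distance from uniform of (seed, Ext(x, seed)), over flat
    sources of min-entropy k; ext maps (x, seed) to an m-bit integer."""
    worst = 0.0
    for S in combinations(range(2 ** n), 2 ** k):      # flat sources
        p = np.zeros(d * 2 ** m)                       # dist. of (seed, output)
        for x, s in product(S, range(d)):
            p[s * 2 ** m + ext(x, s)] += 1.0 / (2 ** k * d)
        worst = max(worst, 0.5 * np.abs(p - 1.0 / p.size).sum())
    return worst

# Toy extractor: inner product (mod 2) of x with a seed-indexed bit mask.
n, d, m, k = 4, 4, 1, 2
masks = [0b0011, 0b0101, 0b0110, 0b1001]
ext = lambda x, s: bin(x & masks[s]).count("1") % 2
print(advantage(ext, n, d, m, k))
```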
The advantage of a quantum adversary is Q(Ext, k) which is like the cb (completely bounded) version of the above. Basically suppose that instead of the entries of x being numbers they are matrices. This is kind of like what team Madrid did with nonlocal games.
They also define an SDP relaxation which has the advantage of being efficiently computable.
It follows trivially from the definitions that
$latex C(Ext,k) \leq Q(Ext,k) \leq SDP(Ext,k)$.
The nontrivial stuff comes from the fact that the SDP relaxations are in some cases analytically tractable, e.g. showing that small-output and high input entropy extractors are quantum-proof (i.e. secure against quantum adversaries).
A nice open question: Given that there is an SDP, can we define a convergent hierarchy?
Richard Cleve, Debbie Leung, Li Liu and Chunhao Wang.
Near-linear construction of exact unitary 2-designs
abstract
The basic idea of a unitary $latex t$-design is that it is a discrete set of unitary operators that exactly reproduce the first $latex t$ moments of the Haar measure, i.e. the uniform measure on the unitary group. This construction lets us sample efficiently and still reproduce important properties of the Haar measure. But we don’t just want constructions with small cardinality, we also want them to have efficient gate complexity.
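A standard numerical sanity check of the 2-design property is the frame potential: for an exact unitary 2-design in dimension $latex d \geq 2$, the average of $latex |\mathrm{tr}(U^\dagger V)|^4$ over pairs of elements equals the Haar value of 2. A Python sketch (mine, not the authors' construction) that generates the 24 single-qubit Cliffords from H and S and checks this:

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

def key(U):
    """Canonical form modulo global phase, for de-duplication."""
    idx = int(np.argmax(np.abs(U.flatten()) > 1e-9))   # first nonzero entry
    return tuple(np.round((U / U.flatten()[idx]).flatten(), 6))

# Close {I} under left-multiplication by H and S: the 1-qubit Clifford group.
group, frontier = {key(np.eye(2)): np.eye(2)}, [np.eye(2)]
while frontier:
    U = frontier.pop()
    for g in (H, S):
        W = g @ U
        if key(W) not in group:
            group[key(W)] = W
            frontier.append(W)
cliffords = list(group.values())
print(len(cliffords))      # 24 elements (modulo phase)

F2 = np.mean([abs(np.trace(U.conj().T @ V)) ** 4
              for U, V in product(cliffords, repeat=2)])
print(F2)                  # ~2.0, the Haar value: an exact 2-design
```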
The speaker jokingly says, “unfortunately, these have a lot of applications.” This includes randomized benchmarking, decoupling and error-tolerant QKD schemes. There is another joke about how decoupling doesn’t mean breaking up (romantic) couples. But if Alice and Bob are entangled, and one passes through a decoupling map, then they end up nearly separable! That sounds like a breakup to me. Note that a weaker randomizing map would merely leave them with a data hiding state, which I suppose corresponds to the facebook status “It’s Complicated.” These jokes are still in focus groups, so please forgive them.
We would like to have unitary designs that have low gate complexity. The main result of this talk is a Clifford-only unitary 2-design that uses $latex O(n \log n \log \log n)$ gates, but it assumes an extension of the Riemann hypothesis. Without this hypothesis, they also have a non-Clifford construction with the same complexity, and a Clifford-only scheme with complexity $latex O(n \log^2 n \log \log n)$. Sampling uniformly from the Clifford group has gate complexity $latex O(n^2/\log n)$, so this is an improvement.
One can construct these 2-designs by writing gates in the form $latex S^{\otimes n} H^{\otimes n} M_r$, where the matrix $latex M_r$ has a decomposition with low complexity that is the main technical contribution of the authors. By using the result that Pauli mixing by conjugation implies unitary 2-design, one only needs to consider certain forms of the matrix $latex M_r$. Ronald de Wolf suggested that these results could be improved a bit further by using the fast integer multiplication algorithms due to Fürer.
Carl Miller and Yaoyun Shi.
Universal security for randomness expansion. Previous title: Universal security for quantum contextual devices
abstract arXiv:1411.6608
Carl opens his talk by saying that this result has been a goal of the U of M group for the past four years. It then proceeds in suitably epic fashion by looking up the definition of randomness from The Urban Dictionary (highlight: “A word often misused by morons who don’t know very many other words.”) and moving onto modern cryptographic standards (skipping, though, this gem).
This talk follows the research program laid out by Colbeck in 2006, where he suggested that one might take the output of some CHSH games, verify that the winning probability is higher than the optimal classical value, and then apply a deterministic extractor. Difficulties here involve quantum side information, information locking, and the other nuisances that composable security was invented to address. Vazirani and Vidick showed in 2011 that this was possible if, in the honest case, the Bell inequality is violated optimally; at last year's QIP, Miller and Shi showed it worked with some not-quite-perfect value of the CHSH game; and the current result extends this to any beyond-classical value of the game.
One of the key technical ingredients is a new uncertainty principle.
We will not fully do it justice here. Roughly it is as follows. Define
$latex Y = \text{tr} [\rho_+^{1+\epsilon} + \rho_-^{1+\epsilon}] / \text{tr}[\rho^{1+\epsilon}]$ and $latex X = \text{tr} [\rho_0^{1+\epsilon}] / \text{tr} [\rho^{1+\epsilon}]$, where $latex \{\rho_0,\rho_1\}$ and $latex \{\rho_+,\rho_-\}$ are the post-measurement states resulting from a pair of anticommuting measurements on $latex \rho$. Given this, the Miller-Shi uncertainty principle states that the pair $latex (X,Y)$ must fit into a particular region of the plane. (See eq. (2.2) of their paper for a bit more detail.)
The proof uses the “uniform convexity of the $latex S_{1+\epsilon}$ norm” due to Ball, Carlen, & Lieb (see also this note). Carl suggests there are more applications. Fernando Brandao and I [Aram] have one! Stay tuned.
At one point Carl says: “By an inductive argument (which actually takes 20 pages), we see that…”
The result can then be generalized beyond X-Z measurements to any pair of non-commuting measurements.
Now what about totally untrusted devices? These too can be forced, using a VV-like approach, to act as though they are performing non-commuting measurements.
This can be generalized even further to Kochen-Specker inequalities and contextuality games, modulo some mild assumptions on how the device works.
Open problems: what resources are used? This protocol uses a linear amount of entanglement, for example. For any experimentalists watching (for whom entanglement is scarce and randomness abundant), this part of the talk must sound funny.
Neil J. Ross and Peter Selinger.
Optimal ancilla-free Clifford+T approximation of z-rotations (Plenary Talk)
abstract arXiv:1403.2975
Solovay-Kitaev was all about geometry, but recently the field of quantum compiling has shifted to algebraic number theory, following the path-breaking 2012 paper by Kliuchnikov, Maslov, and Mosca.
Some number theory: In the ring $latex \mathbb{Z}[\sqrt{2}]$ define $latex (a+b\sqrt{2})^{\bullet} = a-b\sqrt{2}$ and the norm $latex N(a) := \sqrt{a^\bullet a}$.
Applying the bullet operation (a Galois automorphism of the number field) to a compact interval yields a discrete unbounded set. The 1-d grid problem is: given finite intervals $latex A, B \subset \mathbb{R}$, find an element of $latex A\cap B^{\bullet}$. Generically there will be $latex O(|A|\cdot |B|)$ solutions, and these are easy to find when both $latex |A|$ and $latex |B|$ are large, corresponding to a fat rectangle (when $latex \mathbb{Z}[\sqrt{2}]$ is viewed as a 2-d lattice). When we have a long thin rectangle, we can convert it into this case by rescaling.
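The 1-d problem is indeed straightforward: from $latex x = a+b\sqrt{2} \in A$ and $latex x^\bullet = a-b\sqrt{2} \in B$ one gets $latex a \in (A+B)/2$ and $latex b \in (A-B)/(2\sqrt{2})$, so one can simply enumerate the integer candidates. A toy Python sketch (mine):

```python
import math

def grid_points(A, B):
    """1-d grid problem by enumeration: all a + b*sqrt(2) lying in the
    interval A whose conjugate a - b*sqrt(2) lies in the interval B."""
    (a0, a1), (b0, b1) = A, B
    sols = []
    for a in range(math.ceil((a0 + b0) / 2), math.floor((a1 + b1) / 2) + 1):
        for b in range(math.ceil((a0 - b1) / (2 * math.sqrt(2))),
                       math.floor((a1 - b0) / (2 * math.sqrt(2))) + 1):
            x, xb = a + b * math.sqrt(2), a - b * math.sqrt(2)
            if a0 <= x <= a1 and b0 <= xb <= b1:
                sols.append((a, b))
    return sols

# A fat rectangle: both intervals long, so there are many solutions.
print(grid_points((0, 10), (0, 10)))
```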
The 2-d grid problem: Now we consider $latex \mathbb{Z}[\omega]$ for $latex \omega=\exp(2\pi i /8)$. This turns into a more complicated set intersection problem on a 2-d lattice whose details we omit. But just like the rescaling used for 1-d, now we use a more complicated set of shearing operators to transform the problem into one where the solutions are easier to find. But we need an extra property: the “uprightness” of the rectangles.
Main Theorem: Let A and B be two convex sets with non-empty interior. Then there is a grid operator $latex G$ such that $latex G(A)$ and $latex G^\bullet(B)$ are both 1/6 upright. Moreover, it can be computed efficiently.
Theorem [Matsumoto-Amano ’08]: Every single-qubit Clifford+T operator $latex W$ can be written uniquely in the form $latex W = (T | \varepsilon)(HT | SHT)^* C$, where $latex C$ is a Clifford. (In this regular-expression notation: an optional leading $latex T$, followed by any word in the syllables $latex HT$ and $latex SHT$, followed by a Clifford.)
Exact synthesis of Clifford+T operators was solved by Kliuchnikov, Maslov, and Mosca. If $latex W$ is a 2-by-2 unitary operator, then it is Clifford+T iff its matrix elements all lie in $latex \frac{1}{2^{k/2}} \mathbb{Z}[\omega]$ for some integer $latex k$, where $latex \omega = \sqrt{i}$. Moreover, if $latex \det W = 1$, then the T-count of the resulting operator is equal to $latex 2k-2$ (with $latex k$ minimal).
The upshot of this is that if you have a factoring oracle, then the algorithm gives circuits of optimal T-count. In the absence of such an oracle, it returns a nearly optimal T-count: the second-to-optimal T-count $latex m$ plus a term of order $latex O(\log\log 1/\epsilon)$.
Adam Bouland and Scott Aaronson.
Generation of Universal Linear Optics by Any Beamsplitter
abstract arXiv:1310.6718
Are there any interesting sets of beamsplitter-type quantum gates that don’t generate either only permutation matrices or arbitrary unitaries? The main result is that if one has a two-level unitary of determinant 1 and with all entries non-zero, then it densely generates SU(m) or SO(m) for $latex m \ge 3$. In other words, any beamsplitter that mixes modes is universal for three or more modes. Another way to say this is that for any beamsplitter $latex b$, either $latex b$ is efficiently classically simulable or else it is universal for quantum optics. This is a nice dichotomy theorem. The proof requires the classification of the finite subgroups of SU(3), which was, surprisingly, completed only in 2013. (It was written down in 1917 as “an exercise for the reader” but educational standards have apparently come down since then.)
Isaac Kim.
On the informational completeness of local observables
abstract arXiv:1405.0137
2014 has seen some of the most exciting news for quantum conditional mutual information since strong subadditivity.
The goal of this talk is to find a large class $latex S$ of interesting states for which quantum state tomography and quantum state verification can be done efficiently. We would also like to be able to efficiently verify that a given state is in $latex S$.
At one point in the intro, Isaac says that a point is “…for the experts, err, for the physicists.” Then he remembers who his audience is. Later he says “for this audience, I don’t have to apologize for the term ‘quantum conditional mutual information'”.
This talk proposes a class $latex S_n$ that has this verification property in time $latex O(n)$ where $latex n$ is the number of particles. Any state in $latex S_n$ is defined by a set of O(1)-particle local reduced density operators.
Isaac first reviews the approach to matrix product state tomography, which uses the parent Hamiltonian of a matrix product state to show that any (injective) MPS is completely determined by a local set of observables. This result is a generalization of the MPS tomography result to higher dimensions, but with the important difference that there is no need for a global wavefunction at all!
The main result, roughly speaking, is that there exists a certificate $latex \epsilon(\{\rho_{\mathcal{N}_k}\})$, computable from the reduced density matrices $latex \rho_{\mathcal{N}_k}$ on local neighborhoods $latex \mathcal{N}_k$, such that
$latex \|\rho - \rho'\|_1 \le \epsilon(\{\rho_{\mathcal{N}_k}\})$.
Local density matrices can (approximately) reconstruct the global state. Previously this was done assuming that the global state is a low-bond-dimension MPS. But that’s in a sense circular because we have to make a (strong) assumption about the global state. This work allows us to do this purely using local observables. Awesome.
The talk proceeds by giving a reconstruction procedure by which a global state $latex \rho’$ can be reconstructed from the marginals of a state $latex \rho$, and by showing conditions under which $latex \rho\approx \rho’$. The global state is completely determined by the local reduced density operators if the conditional quantum mutual information is zero.
Theorem 1: If $latex \rho_{AB} = \sigma_{AB}$ and $latex \rho_{BC} = \sigma_{BC}$ then
$latex \frac{1}{4} \|\rho_{ABC} - \sigma_{ABC}\|_1^2 \leq I(A:C|B)_\rho + I(A:C|B)_\sigma$
This is nice, but the upper bound depends on global properties of the state. (Here we think of AB and BC as “local” and ABC as “global.” Don’t worry, this will scale well to $latex n$ qubits.)
To proceed we would like to upper bound the QCMI (quantum conditional mutual information) in terms of locally observable quantities. This is a straightforward but clever consequence of strong subadditivity. Specifically
$latex I(A:C|B) \leq S(CF) – S(F) + S(BC) – S(B)$
for any system $latex F$.
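To spell out the step: by definition $latex I(A:C|B) = S(AB) + S(BC) - S(B) - S(ABC)$, and weak monotonicity (an equivalent form of strong subadditivity) gives $latex S(ABC) + S(CF) \geq S(AB) + S(F)$ for any $latex F$; substituting the latter into the former yields the bound above.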
For judicious choice of regions, and assuming a strong form of the area law conjecture ($latex S(A) = a |\partial A| – \gamma + o(1)$ in 2-d), this bound is effective.
We can then build up our reconstruction qubit-by-qubit until we have covered the entire system.
Applications:
- quantum state tomography: Measure local observables and check the locally computable upper bound. If the latter is small, then the former contains sufficient information to reconstruct the global wavefunction.
- quantum state verification: similar, except we just want to check whether our physical state is close to a desired one
Bartek Czech, Patrick Hayden, Nima Lashkari and Brian Swingle.
The information theoretic interpretation of the length of a curve
abstract arXiv:1410.1540
At ultra-high energies, we need a theory of quantum gravity to accurately describe physics. As we move down in energy scales, vanilla quantum field theory and eventually condensed-matter physics become relevant. Quantum information theory comes into play in all three of these arenas!
One of the principal claims of this talk is that quantum information is going to be central to understanding holography, which is the key concept underlying, e.g., the AdS/CFT correspondence. Quantum gravity theories in negatively curved spacetimes (saddle-like geometries) seem to have a duality: one Hilbert space has two theories. The duality dictionary helps us compute quantities of interest in one theory and map the solutions back to the other theory. It turns out that these dictionaries often involve computing entropic quantities.
A funny feature of AdS space is that light can reach infinity and bounce back in finite time! This acts just like boundary conditions (at infinity) and simplifies certain things. Inside the bulk of AdS space, geodesics are (in the Poincare disk model) just circular arcs that meet the boundary at right angles. When there is matter in the bulk, these get deformed. We’d love to be able to interpret these geometric quantities in the bulk back in the CFT on the boundary.
The Ryu-Takayanagi correspondence links the length of a geodesic in the bulk with the entanglement entropy in the CFT on the boundary. This can be related (using e.g. work by Calabrese & Cardy) to the central charge of the CFT. A really neat application of this correspondence is a very simple and completely geometric proof of strong subadditivity due to Headrick and Takayanagi (with the important caveat that it applies only to a restricted class of states, namely those states of CFTs that have gravity duals).
The new result here is to generalize the Ryu-Takayanagi correspondence to general curves, not just geodesics. The main result: there is a correspondence between the length of convex curves in the bulk and the optimal entanglement cost of streaming teleportation communication tasks that Alice and Bob perform on the boundary.
Thank you Steve, for this fine live-blogging, which is valuable to everyone who reads it (including me). On Scott Aaronson’s weblog I have posted an appreciation of the Pontifical live-blogging, in particular the review of Isaac Kim’s “On the informational completeness of local observables” (arXiv:1405.0137).
It would be terrific (as it seems to me) if the large-and-lively readership of Shtetl Optimized were better tuned-in to the deep-and-broad topics of The Quantum Pontiff. Please allow me to express my appreciation of and gratitude toward both forums.
Thank you, Steve (especially) for your outstanding personal commitment and sustained professional efforts in so ably live-blogging QIP 2015.
Thanks for the recognition, but Charlie and especially Aram have been helping as well! I just posted this day’s live-blog, but it was all three of us that co-wrote it.
BTW, day 5 is coming soon… we just have to clean up the latex a bit. We’ll also write about the business meeting.
Thank you Steve; a four-fold appreciation of the Pontiffs now appears as a comment on Shtetl Optimized. Coverage of the final day of QIP 2015 — in particular the closing remarks and any concluding observations by the Pontiffs — would be appreciated gratefully by many (including me).
Thanks Aram for blogging about the Miller-Shi result. The “uniform convexity” is very useful and I look forward to learning about your new application with Fernando. Just to respond to your comment on the open problem about entanglement: if the entanglement had to be prepared in advance, then it would indeed be not so encouraging. But in the protocol we analyzed, the device components can exchange information in between rounds of games. Thus they only need to store a single qubit, long enough for playing one game. The open problem would make sense only when we forbid such communication. I have to say I’m not completely sure about its real-world relevance (perhaps some experimentalists could chime in), but perhaps in some hopefully not so distant future we will know better how to store entanglement for a longer time while long-distance communication remains difficult (or in some paranoid situations where it isn’t possible). Then we have good reason to minimize entanglement without communication. But in any case, it is a great theoretical question addressing what quantity determines the extractable bits in an untrusted quantum device.