QIP 2012 Day 5

The quantum pontiff brains have reached saturation.

Eric Chitambar, Wei Cui and Hoi-Kwong Lo:
Increasing Entanglement by Separable Operations and New Monotones for W-type Entanglement

These results demonstrate a large quantitative gap between LOCC and SEP for a particular task called random EPR distillation. Therefore, SEP may not always be a good approximation for LOCC. They also construct entanglement monotones that can increase under SEP, the first known examples given by analytic functions. Finally, they show that LOCC is not a closed set of operations, so optimal LOCC protocols may not exist.
Recall how LOCC works: Alice and Bob share a bipartite pure state $latex |\psi\rangle_{AB}$. Alice makes a measurement on her system and sends some classical bits to Bob. Bob then makes a measurement on his system and sends some bits to Alice. They repeat this many times. Any LOCC operation is a collection of maps $latex \{\mathcal{E}_j\}$ such that the sum of the maps is trace preserving and each map has a separable Kraus decomposition in which each operator can be built from successive rounds of local measurements. This is a pretty difficult condition to study, so it is convenient to relax LOCC to SEP. For SEP, we drop the “successive rounds of local measurements” requirement on the Kraus operators. Given an arbitrary SEP operation, we can always implement it with LOCC if we allow for some probability of failure, i.e. as an SLOCC operation. Can we eliminate this probability of failure? Not in general. There are maps that are in SEP but not in LOCC, as first demonstrated by Bennett et al. (“Quantum nonlocality without entanglement”), and subsequently investigated by many other authors.
So now we know that SEP and LOCC are not equal, but how unequal are they? That is, can we quantify the gap between the two classes? There is some previous work, such as that of Bennett et al., who showed that the mutual information for state discrimination has a gap of at least $latex 10^{-6}$, and that of Koashi et al., who showed that the gap in the success probability for unambiguous state discrimination is at least 0.8%. These gaps look rather small, so you might plausibly conjecture that SEP is a good approximation for LOCC in general.
SEP is precisely the class of operations that cannot create entanglement out of nothing, but if we seed things with a little bit of entanglement, can we use SEP to increase the entanglement? We need to define some LOCC monotone to make this a meaningful statement.
Surprisingly, yes! There are entanglement transforms that work in SEP but not LOCC, and therefore entanglement monotones (but artificial ones) that can increase under SEP but not LOCC. To give some intuition, though, here is a non-artificial task.
Random-party distillation of W-class states:
An N-partite W-class state looks (up to local unitary rotation) like
$latex |\vec{x}\rangle = \sqrt{x_0}\,|00\ldots 00\rangle + \sqrt{x_1}\,|10\ldots 00\rangle + \sqrt{x_2}\,|01\ldots 00\rangle + \cdots + \sqrt{x_N}\,|00\ldots 01\rangle$.
It’s a good class of states to study because these states are parameterized by only N real numbers (as opposed to the $latex 2^N$ parameters of a generic N-partite state), it is easy to characterize how they transform under local measurements, and the class is closed under SLOCC. See this paper by Kintas and Turgut for a review of the properties of W-class states.
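To make that parameterization concrete, here is a minimal numpy sketch (our own illustration, not from the talk) that builds an N-partite W-class state from coefficients $latex (x_0, x_1, \ldots, x_N)$ and checks normalization:

```python
import numpy as np

def w_class_state(x):
    """Build an N-partite W-class state from coefficients (x_0, ..., x_N).

    The state is sqrt(x_0)|0...0> + sum_k sqrt(x_k)|0..1..0>, with the single 1
    sitting on party k; the coefficients must sum to 1.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - 1                      # number of parties
    assert np.isclose(x.sum(), 1.0), "coefficients must sum to 1"
    psi = np.zeros(2 ** n)
    psi[0] = np.sqrt(x[0])              # |00...0> component
    for k in range(1, n + 1):
        idx = 1 << (n - k)              # basis index of |0...010...0>, 1 on party k
        psi[idx] = np.sqrt(x[k])
    return psi

# Example: the standard tripartite W state, x_0 = 0, x_1 = x_2 = x_3 = 1/3.
psi_W = w_class_state([0, 1/3, 1/3, 1/3])
print(np.linalg.norm(psi_W))            # 1.0 (up to floating point)
```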
Here is a nice example, due to Fortescue and Lo. Start with a tripartite W state, and have each party perform the measurement $latex \{M_0, M_1\}$, with $latex M_0 = \sqrt{1-\epsilon}\,|0\rangle\langle 0|+|1\rangle\langle 1|$ and $latex M_1 = \sqrt{\epsilon}\,|0\rangle\langle 0|$, then broadcast the result. If the outcomes are 0,0,0, then nothing happens. If the outcomes are 1,0,0 or 0,1,0 or 0,0,1, then the two parties measuring 0 are left with an EPR pair; hence this achieves random-party EPR distillation. If there are two or three $latex M_1$ outcomes, then the entanglement is lost. However, as $latex \epsilon \rightarrow 0$, the probability of this happening goes to zero, while the number of rounds goes up. Intriguingly, this is evidence that the set of LOCC operations is not closed (i.e. does not contain all of its limit points), but previously that had not been proven. Of course, we can also generalize this to the N-partite setting.
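Here is a quick numerical sanity check of that protocol (again our own sketch, assuming the standard tripartite W state and the measurement written above). Per round, the all-zero outcome leaves the W state untouched, a single $latex M_1$ outcome leaves the other two parties with an EPR pair, and as $latex \epsilon$ shrinks the overall success probability approaches 1 while the expected number of rounds blows up:

```python
import numpy as np
from itertools import product

def round_probabilities(eps):
    """Outcome probabilities for one round of the Fortescue-Lo protocol."""
    M0 = np.array([[np.sqrt(1 - eps), 0], [0, 1]])   # "keep going" outcome
    M1 = np.array([[np.sqrt(eps), 0], [0, 0]])       # "flag" outcome
    W = np.zeros(8)
    W[[4, 2, 1]] = 1 / np.sqrt(3)                    # (|100> + |010> + |001>)/sqrt(3)
    p_stay = p_epr = p_fail = 0.0
    for outcomes in product([0, 1], repeat=3):
        K = np.array([1.0])
        for o in outcomes:
            K = np.kron(K, M1 if o else M0)
        p = np.linalg.norm(K @ W) ** 2
        ones = sum(outcomes)
        if ones == 0:
            p_stay += p                              # state unchanged: repeat the round
        elif ones == 1:
            p_epr += p                               # the other two parties share an EPR pair
        else:
            p_fail += p                              # entanglement lost
    return p_stay, p_epr, p_fail

for eps in [0.5, 0.1, 0.01]:
    stay, epr, fail = round_probabilities(eps)
    p_success = epr / (epr + fail)                   # repeat until the protocol terminates
    print(f"eps={eps:0.2f}: P(EPR)={p_success:0.4f}, expected rounds={1/(epr+fail):0.1f}")
```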
This can be generalized by using local filtering to first (probabilistically) map a W-class state to one with $latex x_1=\cdots=x_N$. In fact, the resulting probability of success is optimal (see paper), and thus this optimal probability of EPR distillation is an entanglement monotone.
The fact that they establish an entanglement monotone means they get an awesome corollary. There is a parameter $latex \kappa$ which represents the success probability of distillation of an EPR pair involving Alice. They give an explicit formula for it, and prove that it decreases under any measurement made by Alice that changes the state. Thus, they prove that:

LOCC is not closed!

Here is a great open question (not in the talk): Find a similar monotone that describes data hiding, for example to improve the analysis of these states.
For the multipartite setting, there are a few more ideas. There is a single-copy “combing transformation” (analogous to the one of Yang & Eisert), which transforms all the entanglement to bipartite entanglement shared with Alice. Again SEP is better than LOCC, in ways that can be quantified.
Some open problems:

  • What about W-class states with $latex x_0 \neq 0$? They have some partial results, but the question remains open.
  • Can the gap between SEP and LOCC be increased arbitrarily?
  • Can one apply these ideas to other entanglement classes?
  • Do there exist similar phenomena in bipartite systems?
  • How much entanglement is required to implement SEP operations?

Rodrigo Gallego, Lars Erik Würflinger, Antonio Acín and Miguel Navascués:
Quantum correlations require multipartite information principles
(merged with)
An operational framework for nonlocality

Normally, we define “3-partite entanglement” to mean states that are not separable with respect to any bipartition; equivalently, they cannot be created by LOCC even if two parties get together. But this definition can be dangerous when we are considering nonlocality, as we will see below.
Nonlocality is a resource for device-independent information protocols. Define a local joint probability distribution to be one which can be written as $latex P(ab|xy) = \sum_\lambda p(\lambda)\, P(a|x,\lambda)\, P(b|y,\lambda)$.
Nonlocality is often described in terms of boxes that mimic measurements on entangled states. But entanglement can also be manipulated, e.g. by LOCC. What is the analogue for boxes? Are there variants of entanglement swapping, distillation, dilution, etc.? In most cases, when we make the boxes stronger, the dynamics get weaker, often becoming trivial.
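For readers who want a concrete anchor for “boxes” and “local distributions”, here is a small sketch (ours, not from the talk) that recovers the CHSH local bound of 2 by enumerating deterministic strategies and compares it with the maximally nonlocal PR box, which reaches 4 (quantum boxes top out at $latex 2\sqrt{2}$):

```python
import numpy as np
from itertools import product

# CHSH value of a box P(ab|xy), with a, b, x, y in {0,1}:
# S = sum_{x,y} (-1)^{xy} E_xy, where E_xy = sum_{a,b} (-1)^{a+b} P(ab|xy).
def chsh(P):
    S = 0.0
    for x, y in product([0, 1], repeat=2):
        E = sum((-1) ** (a + b) * P[a, b, x, y] for a, b in product([0, 1], repeat=2))
        S += (-1) ** (x * y) * E
    return S

# Local deterministic boxes: a = f(x), b = g(y); every local box is a mixture of these.
best_local = 0.0
for fa0, fa1, gb0, gb1 in product([0, 1], repeat=4):
    P = np.zeros((2, 2, 2, 2))
    for x, y in product([0, 1], repeat=2):
        a = fa0 if x == 0 else fa1
        b = gb0 if y == 0 else gb1
        P[a, b, x, y] = 1.0
    best_local = max(best_local, abs(chsh(P)))

# The PR box: a + b = xy (mod 2), each consistent outcome pair with probability 1/2.
PR = np.zeros((2, 2, 2, 2))
for a, b, x, y in product([0, 1], repeat=4):
    if (a + b) % 2 == x * y:
        PR[a, b, x, y] = 0.5

print("local bound:", best_local)   # 2.0
print("PR box     :", chsh(PR))     # 4.0
```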
They define a new class of operations called WPICC, which stands for something about rewirings and pre- and post-processing (note that non-local boxes can be rewired in ways that depend on their outputs, so their causal structure can be tricky). WPICCs are the operations which map local joint probability distributions to local joint probability distributions, and shouldn’t be able to create nonlocal correlations; indeed, we can use them to define nonlocal correlations, just as entanglement is defined via the set of states that can’t be created by LOCC.
However, with these operations, acting on only two parties can create tripartite nonlocality, so this approach to defining nonlocality doesn’t work.
Instead, define a box to be tripartite nonlocal if it doesn’t have a TOBL (time-ordered bilocal) structure, meaning a Markov-like condition that’s described in the paper.
Moral of the story? If you want a sensible definition of tripartite correlations, then beware of operational definitions analogous to LOCC, and focus on mathematical ones, analogous to SEP.

Martin Schwarz, Kristan Temme, Frank Verstraete, Toby Cubitt and David Perez-Garcia:
Preparing projected entangled pair states on a quantum computer

There are lots of families of states which seem useful for giving short classical descriptions of highly entangled quantum states. For example, matrix product states are provably useful for describing the ground states of gapped one-dimensional quantum systems. More generally there are other classes of tensor network states, but their utility is less well understood. A prominent family of states for trying to describe ground states of 2d quantum systems is projected entangled pair states (PEPS). These are just tensor network states with a 2d grid topology. The intuition is that the structure of correlations in the PEPS should mimic the correlations in the ground state of a gapped 2d system, so they might be a good ansatz class for variational methods.
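For readers who haven’t met PEPS before, here is a minimal numpy sketch (our own illustration) of what “tensor network state on a 2d grid” means: four local tensors on a 2×2 open-boundary grid, each with one physical leg and two bond legs, contracted into a 16-dimensional state vector.

```python
import numpy as np

d, D = 2, 3          # physical dimension, bond dimension
rng = np.random.default_rng(0)

# A 2x2 PEPS with open boundaries: each corner tensor has one physical leg
# and two bond legs (one horizontal, one vertical).
A00 = rng.normal(size=(d, D, D))   # legs: physical, right, down
A01 = rng.normal(size=(d, D, D))   # legs: physical, left, down
A10 = rng.normal(size=(d, D, D))   # legs: physical, right, up
A11 = rng.normal(size=(d, D, D))   # legs: physical, left, up

# Contract the four bonds: top row (a), left column (b), right column (c), bottom row (e).
psi = np.einsum('iab,jac,keb,lec->ijkl', A00, A01, A10, A11)
psi = psi.reshape(-1)
psi /= np.linalg.norm(psi)
print(psi.shape)      # (16,): a 4-qubit state described by 4 small tensors
```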
If you could prepare an arbitrary PEPS, what might you be able to do with that? Schuch et al. proved that an oracle that could prepare an arbitrary PEPS would allow you to solve a PP-complete problem. We really can’t hope to do this efficiently, which raises the question: what class of PEPS can we prepare in BQP? That’s the central question that this talk addresses.
We need a few technical notions from the theory of PEPS. If we satisfy a technical condition called injectivity, then we can define (via a natural construction) a so-called parent Hamiltonian whose unique frustration-free ground state is the PEPS. This injectivity condition is actually generic, and many natural families of states satisfy it, so intuitively it seems to be a reasonable restriction. (It should be said, however, that there are also many natural states which don’t satisfy injectivity, for example GHZ states.)
The algorithm is to start from a collection of entangled pairs and gradually “grow” a PEPS by growing the parent Hamiltonian term-by-term. If we add a term to the Hamiltonian, then we can try to project back to the ground space. This will be probabilistic and will require some work to get right. Then we can add a new term, etc. The final state is guaranteed to be the PEPS by the uniqueness of the ground state of an injective parent Hamiltonian.
In order to get the projection onto the new ground state after adding a new term to the Hamiltonian, we can use phase estimation. If you get the right measurement outcome (you successfully project onto the zero-energy ground space), then great! Just keep going. But if not, then you can undo the measurement with the QMA amplification protocol of Marriott and Watrous.
The bound on the run time is governed by a polynomial in the condition number of the PEPS projectors and the spectral gap of the sequence of parent Hamiltonians, as well as the number of vertices and edges in the PEPS graph.
This can also be generalized to PEPS which are so-called G-injective. This condition allows the method to be generalized to PEPS which have topological order, where the PEPS parent Hamiltonian has a degenerate ground space.
Good question by Rolando Somma: What happens if the ground state is stoquastic? Can you get any improvements?

Esther Hänggi and Marco Tomamichel:
The Link between Uncertainty Relations and Non-Locality

The main result: Nonlocality, which means achieving Bell value $latex \beta$, implies an uncertainty relation, namely that the incompatible bases used in the Bell experiment have overlap at most $latex c^*=f(\beta)$.
This is a sort of converse to a result of Oppenheim and Wehner.
This has some practical implications: the maximum basis overlap is important for things like QKD and the security of noisy storage, and this result implies that it can be tested robustly by observing a Bell inequality violation.
The kind of uncertainty relation we are interested in is of the form $latex H(X|B) + H(Y|C) \geq -\log_2(c)$, and indeed because this is ETHZ work, we should expect a smoothed min-entropy version as well: $latex H^\epsilon_{\max}(X|B) + H^\epsilon_{\min}(Y|C) \geq -\log_2(c)$.
Actually, we need a variant to handle the case that both bases contain a “failure” outcome. This work replaces the maximum overlap $latex c$ (at least in the case of the BB84 bases) with $latex c^* = \frac{1+\epsilon}{2}$, where $latex \epsilon$ is the probability of the failure outcome. (See the paper for the more general formulation.)
This parameter $latex c^*$ is related to the maximum CHSH-type value $latex \beta$ via a nice simple formula. Unlike some previous work, this result is somewhat more “device-independent”, not requiring assumptions such as knowing that the systems are qubits. $latex \beta$, in turn, can be related to an entropic uncertainty relation.
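To unpack the overlap parameter, here is a small numerical check (our own sketch): for the BB84 bases the maximum overlap is $latex c = 1/2$, giving the familiar one-bit bound $latex -\log_2 c = 1$, and the failure-tolerant parameter $latex c^* = \frac{1+\epsilon}{2}$ weakens the bound smoothly as the failure probability grows.

```python
import numpy as np

# BB84 measurement bases: computational (Z) and Hadamard (X).
Z = [np.array([1, 0]), np.array([0, 1])]
X = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]

# Maximum overlap c = max_{i,j} |<z_i|x_j>|^2.
c = max(abs(np.vdot(z, x)) ** 2 for z in Z for x in X)
print("c =", c, " uncertainty bound:", -np.log2(c), "bits")   # c = 0.5, bound = 1 bit

# With a "failure" outcome of probability eps, the effective overlap for the BB84
# bases becomes c* = (1 + eps)/2, and the entropic bound degrades accordingly.
for eps in [0.0, 0.1, 0.3]:
    c_star = (1 + eps) / 2
    print(f"eps={eps:0.1f}: c*={c_star:0.2f}, bound={-np.log2(c_star):0.3f} bits")
```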

Salman Beigi and Amin Gohari:
Information Causality is a Special Point in the Dual of the Gray-Wyner Region

What is information causality? Essentially the idea that the bound on random access coding should apply to general physical theories in a nonlocal setting. More precisely, Alice has independent bits $latex a_1,\ldots,a_N$, Bob has $latex b \in [N]$, Alice sends a message $latex x$ to Bob which becomes part of Bob’s state $latex \beta$, and Bob tries to guess $latex a_b$. The bound is $latex H(x) \geq \sum_{i=1}^N I(a_i; \beta|b=i)$. This holds classically because Bob can guess $latex a_1$ without giving up his ability to guess the bits $latex a_2,\ldots,a_N$. So he can keep guessing all the others and his mutual information just adds up; but it cannot ultimately add up to more than $latex H(x)$.
If you look at the details, the key ingredients are the consistency of mutual information (that is, it accurately represents correlations), data processing and the chain rule. These all apply to quantum states as well (hence the quantum random access bound).
Overview of the talk

  1. connection to Gray-Wyner problem
  2. generalizing information causality
  3. connecting to communication complexity.

The Gray-Wyner region is defined as the set of rates $latex (R_0,\ldots,R_N)$ such that there exists a random variable $latex e$ with $latex R_0 \ge I(a;e)$ and $latex R_i \ge H(a_i|e)$ for $latex i=1,\ldots,N$.
Thm: For any physical theory satisfying the above “key ingredients” plus AMI (accessibility of mutual information, to be defined later), the point
$latex (H(X), H(a_1|\beta_1, b=1), \ldots, H(a_N|\beta_N, b=N))$ belongs to the Gray-Wyner region.
AMI basically means that the mutual information satisfies something like HSW. There is an issue here, which the authors are looking into, with whether the Holevo information or the accessible information is the right quantity to use.
This theorem means that the information causality argument can be generalized: using the same assumptions, we also obtain all of the inequalities that are known to apply to the Gray-Wyner region. Hence the reference in the title to the dual of the Gray-Wyner region; the dual consists of all linear inequalities that are satisfied by the entire Gray-Wyner region.
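As a tiny sanity check of the theorem (our own toy example, not from the paper): take $latex N=2$ uniform independent bits, let Alice’s classical one-bit message be $latex x = a_1$ (so Bob’s “state” $latex \beta$ is just $latex x$), and pick the auxiliary variable $latex e = a_1$. The resulting rate point indeed lands inside the Gray-Wyner region:

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a probability array of any shape."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Joint distribution P(a1, a2, x) for the toy protocol x = a1.
P = np.zeros((2, 2, 2))
for a1 in (0, 1):
    for a2 in (0, 1):
        P[a1, a2, a1] = 0.25

H_x = H(P.sum(axis=(0, 1)))                  # H(X)
H_a1_given_x = H(P.sum(axis=1)) - H_x        # H(a1, x) - H(x)
H_a2_given_x = H(P.sum(axis=0)) - H_x        # H(a2, x) - H(x)
point = (H_x, H_a1_given_x, H_a2_given_x)

# Gray-Wyner membership: exhibit e = a1 and check R_0 >= I(a;e), R_i >= H(a_i|e).
I_a_e = H(P.sum(axis=(1, 2)))                # e is a deterministic function of a, so I(a;e) = H(e)
H_a1_given_e, H_a2_given_e = 0.0, 1.0        # e = a1 determines a1 and says nothing about a2
print(point)                                 # (1.0, 0.0, 1.0)
print(point[0] >= I_a_e and point[1] >= H_a1_given_e and point[2] >= H_a2_given_e)  # True
```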
This improves our understanding of these bounds. It also gives some new lower bounds on the communication cost of simulating nonlocal correlations.

Marcus P. Da Silva, Steven T. Flammia, Olivier Landon-Cardinal, Yi-Kai Liu and David Poulin:
Practical characterization of quantum devices without tomography

Current experimental efforts at creating highly entangled quantum states of many-body systems and implementing unitary quantum gates quickly run into serious problems when they try to scale up their efforts. The dimensionality of the Hilbert space is a serious curse if your goal is to characterize your device using full tomography. For example, the 8-qubit experiment that was done by Häffner et al. in 2005 took a tremendous number of measurements and many many hours of classical post-processing.
But what if you could avoid doing tomography and still get a good characterization of your device? Perhaps you are only interested in one number, say, the fidelity. Can you directly compute the fidelity without going through tomography?
Yes! And you can do it much more efficiently if you are trying to produce a pure quantum state or a unitary quantum gate.
We’ll focus on the case where the target state is a pure state for simplicity. But everything carries over directly to the case of unitary quantum gates with only minor modifications.
The fidelity between your target state $latex \psi = |\psi\rangle\langle\psi|$ and the actual (arbitrary) state $latex \rho$ in the lab is given by $latex F = \mathrm{Tr}(\psi\rho)$.
If you expand the expression for F in everybody’s favorite basis, the Pauli basis, then you get
$latex F = \sum_i \mathrm{Pr}(i)\, x_i$, where $latex \mathrm{Pr}(i) = \mathrm{Tr}(\psi \hat\sigma_i)^2/2^n$ is a probability distribution which depends only on the target state and
$latex x_i = \mathrm{Tr}(\rho \hat\sigma_i)/\mathrm{Tr}(\psi \hat\sigma_i)$ is something which can be estimated in an experiment by just measuring $latex \hat\sigma_i$. (We could expand in any orthonormal operator basis, or even a tight frame; the Pauli basis is just to be concrete and experimentally accessible.) But this is just the expectation value of a random variable, $latex F = \mathbb{E}(x)$, and moreover the variance is small, $latex \mathrm{Var}(x) \le 1$. To estimate the fidelity, we just sample from the probability distribution, estimate the corresponding $latex x_i$, and average over a bunch of iid samples. We only need $latex O(1/\epsilon^2)$ different observables to get an estimate to within additive error $latex \pm\epsilon$.
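Here is a small simulation of that sampling procedure (our own sketch, with an idealized “experiment” that just computes the Pauli expectation values exactly): for a 3-qubit GHZ target and a slightly depolarized lab state, the sampled estimate converges to the true fidelity of 0.9125.

```python
import numpy as np
from itertools import product

# Pauli basis for n qubits.
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def pauli_op(idx):
    op = np.array([[1.0 + 0j]])
    for i in idx:
        op = np.kron(op, paulis[i])
    return op

n = 3
d = 2 ** n
ghz = np.zeros(d); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
psi = np.outer(ghz, ghz.conj())                     # target state |GHZ><GHZ|
rho = 0.9 * psi + 0.1 * np.eye(d) / d               # "lab" state: slightly depolarized

idx_list = list(product(range(4), repeat=n))
chi = np.array([np.trace(psi @ pauli_op(i)).real for i in idx_list])
pr = chi ** 2 / d                                   # Pr(i) = Tr(psi sigma_i)^2 / 2^n, sums to 1

rng = np.random.default_rng(1)
samples = rng.choice(len(idx_list), size=200, p=pr)
x = [np.trace(rho @ pauli_op(idx_list[i])).real / chi[i] for i in samples]
print("estimated F:", np.mean(x))
print("true F     :", np.trace(psi @ rho).real)     # 0.9 + 0.1/8 = 0.9125
```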
There are a couple of caveats. Sampling from the relevant probability distribution can in general be a hard problem for your classical computer (though you only have to sample a constant number of times). And the number of repeated measurements that you need to estimate each expectation value might have to scale with the dimension of the Hilbert space.
But there is some good news too. There is a lower bound of $latex \tilde\Omega(d^2)$ on the sample complexity of tomography of pure states using Pauli measurements, whereas the direct fidelity estimation protocol uses only $latex O(d)$ samples. Moreover, there are lots of classes of states and gates which avoid the dimensional scaling, like stabilizer states, Clifford gates, or W-states. For these states and gates, the entire procedure requires only a polynomial amount of time and number of measurements.

Robin Blume-Kohout:
Paranoid tomography: Confidence regions for quantum hardware
(merged with)
Matthias Christandl and Renato Renner:
Reliable Quantum State Tomography

We consider the setting of a source of quantum states and some measurements, followed by the reconstruction of the quantum state or process. How can we get useful and reliable error bars on the reconstruction?
One problem that you might have when doing a naive linear inversion of a state estimate is that the estimate could have negative eigenvalues, which is non-physical. How might you deal with these? One way is to use maximum likelihood estimation (MLE). How might you get an error bar? You could use “bootstrapping”, which is to resample data from the initial estimate to try to estimate the variance. But this can be unreliable: you can even construct counterexamples where it fails miserably! Is there a better way? You could try Bayesian updating, where you begin with a prior on state space and then compute a posterior and an error bar. But now everything that you report will depend on the prior, which might be undesirable in e.g. a cryptographic setting. So we need a reliable way to report error bars.
There are some existing methods for producing error bars (e.g. compressed sensing, MPS tomography, and others), but here we are interested in systematically finding the optimal confidence region for an estimate, one that is adapted to the data itself. Considering efficiency as well is currently beyond the scope of this work; that remains an open question.
The main result: They derive region estimators such that the true state is contained in the region with high probability (over the data), and where the size of the region is, in a certain sense, minimal.
Question: how do these relate to the classical statistical notions of confidence region and Bayesian credible interval? Tell us in the comments!
The authors use two different techniques to achieve these results. RBK’s results use a likelihood ratio function in the setting of iid measurements. The region is the set of states for which the likelihood function is larger than some threshold $latex \epsilon$ which was chosen a priori. The proof involves a classical statistical analysis. The construction by MC and RR is defined as the region in state space which contains the MLE with high probability with respect to Hilbert-Schmidt measure, but enlarged by a tiny factor. The proof in this case uses a postselection technique for quantum channels.
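To get a feel for the likelihood-ratio idea, here is a toy one-parameter version (our own sketch, not the authors’ construction, which works over the full state space): a single qubit measured repeatedly in the Z basis, where the region consists of all values of $latex p = \langle 0|\rho|0\rangle$ whose likelihood is within a fixed ratio of the maximum.

```python
import numpy as np

# Toy one-parameter likelihood-ratio region: a qubit measured N times in the Z basis,
# unknown parameter p = <0|rho|0>, observed k outcomes "0".
N, k = 100, 62
p_hat = k / N                                            # maximum-likelihood estimate

grid = np.linspace(1e-6, 1 - 1e-6, 2001)
loglik = k * np.log(grid) + (N - k) * np.log(1 - grid)   # binomial coefficient cancels in the ratio
region = grid[loglik - loglik.max() >= np.log(0.01)]     # keep p with L(p) >= 0.01 * L(p_hat)
print(f"MLE p = {p_hat:.2f}; likelihood-ratio region = [{region.min():.3f}, {region.max():.3f}]")
```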
Here Matthias wants to give a new proof that was custom-made for QIP! It uses likelihood ratio regions for general measurements in a new way that all three authors came up with together. We can credit QIP with the result because the program committee forced Matthias+Renato to merge their talk with Robin Blume-Kohout’s. But the two papers had different assumptions, different techniques and different results! After discussing how to combine their proofs for pedagogical reasons (there wasn’t time to present both, and it also wouldn’t be so illuminating), they realized that they could use Matthias + Renato’s assumptions (which are more general) and Robin’s method (which is simpler) to get a result that is (more or less) stronger than either previous result.

Sandu Popescu:
The smallest possible thermal machines and the foundations of thermodynamics

Are there quantum effects in biology?
Biological systems are open, driven systems far from equilibrium.
Sandu: “In fact, I hope it is a long time before I myself reach equilibrium.”
This talk is somewhat classical, but it does talk about the physics of information, at small scales.
He talks about very small refrigerators and heat engines. It turns out there is no (necessary) tradeoff between size and efficiency: one can approach Carnot efficiency with a three-level (qutrit) refrigerator, which is thus simultaneously optimal for both size and efficiency.
It was a great talk, but your humble bloggers had mostly reached their ground state by this point in the conference… see the photo at the top of the post.

5 Replies to “QIP 2012 Day 5”

  1. Y’all asked “Question: how do these relate to the classical statistical notions of confidence region and Bayesian credible interval? Tell us in the comments!“.
    They’re the same. This is classical statistics … in the sense that we have a classical parameter (the density matrix), and classical data (whatever we measured), and we’re trying to say something about the parameter.
    Very very little of what has been done so far in quantum state/process tomography lies outside of “classical” statistical inference. Changing that state of affairs is an intriguing challenge…

    1. One other question: can you quantify how much tighter your regions are compared with a naive “large deviation” region estimator such as the ones you get in other settings, e.g. compressed sensing?

      1. Yes and no. This is a good question, but deserves quite a long answer, and I’m currently on the beach in Key Largo, so I’m not going to try. 🙂
        So, short answer. First of all, it really depends on how naive the large deviation bound is. The only fair comparison, I think, is vs. the best possible large deviation bound in any given metric (which would yield the smallest possible spherical-in-that-metric confidence regions). I have no idea whether any given paper gets close to that optimum, and I don’t think anybody really knows what the optimum is for finite sample size!
        But even against such an optimal large-deviation-based region, our approach is (in some circumstances) massively better. Why? Because for any given experiment, the natural confidence regions have shapes that depend on the true state. For some states, they tend to be balls in the 2-norm. For other states, they tend to be balls in Bures metric. Any estimator that imposes a metric in advance will be well-adapted to at most one of these cases! In the other case, it will (more or less) end up circumscribing a sphere around a cigar. In the worst cases, this will overestimate the uncertainty in some dimensions by a factor of O(sqrt(N)).
        That’s the best I can do in limited space and time. And, yes, before you ask, the “natural” confidence regions do correspond roughly to balls in some metric. What metric? The Fisher metric induced on quantum states by the measurement you made. More or less. Which is why we make such a big deal about our procedure being automatically data-adapted — you don’t have to go figuring out what the natural metric is.
        P.S. And, no, our result is not the last word. Just a good start. There are some neat problems left to solve.

  2. As one of the authors of the papers “Quantum correlations require multipartite information principles” and “An operational framework for nonlocality”, I want to point out that the summary of both papers presented in this page is very inaccurate, and only relates to the second article.
    Contrary to the “moral” stated above, the main result of “An operational framework for nonlocality” is that the original (mathematical) definition by Svetlichny of genuine tripartite non-locality is flawed. This is due to the existence of tripartite boxes that are “bipartite local” according to Svetlichny’s criterion, but nevertheless allow the violation of bipartite Bell inequalities when properly wired. We propose an alternative definition of bipartite locality, based on TOBL models, which is paradox-free.
    The aim of the first paper, “Quantum correlations require multipartite information principles”, is to demonstrate that bipartite information principles, like Information Causality, are not enough to recover the set of quantum correlations. This is proven by showing the existence of a non-quantum tripartite box that has the property that any number of copies of it cannot be engineered to violate a bipartite Bell inequality (and, hence, cannot be used to violate bipartite information principles).

    1. Thank you for your corrections and clarifications, Miguel. We did our best to be accurate and comprehensive, but alas, we couldn’t always achieve that ideal.
