**Codes, Geometry and Random Structures**

*Graeme Smith, Detecting incapacity, based on 1108.1807*

A central question in quantum information theory is determining when a channel has nonzero quantum capacity. Pretty much all we know here is that there are a few criteria for proving a channel has *zero* quantum capacity: PPT channels can’t transmit entanglement (since LOCC can’t change PPT states to non-PPT states) nor can anti-degradable channels (because of no-cloning). These two arguments appear to both be pretty specific. Can we put them on the same footing, and hopefully also derive some new criteria?

That’s what this paper does. The talk was good, but the paper also describes the results clearly.

Here is a sample of the results.

Assume that R is unphysical on a set S; e.g. S is the set of quantum states and R is the transpose map. Suppose that for any physical map D, there exists a physical map $D^*$ with $R \circ D = D^* \circ R$. If R is the transpose then $D^*$ is simply the operation you get from taking the complex conjugate of all the Kraus operators of D.

Their theorem then states that if $R \circ N$ is physical, N cannot transmit the states in S. The proof is a simple exercise in composition: if some encoder E and decoder D made $D \circ N \circ E$ act as the identity on S, then $D^* \circ (R \circ N) \circ E = R \circ D \circ N \circ E$ would be a physical map agreeing with R on S, a contradiction.

In this case we say that *N “physicalizes” R* or, equivalently, *R “incapacitates” N*.
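The commutation condition $R \circ D = D^* \circ R$ is easy to check numerically for the transpose. Here is a small numpy sketch (the dimensions, the random channel, and the test state are arbitrary choices of mine) verifying that conjugating all the Kraus operators of a channel D gives a $D^*$ that commutes with the transpose in exactly this sense:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 3, 4  # qudit dimension, number of Kraus operators

# Build a random channel D from a random isometry: stack of Kraus
# operators K_i with sum_i K_i^dag K_i = I.
V = rng.normal(size=(d * k, d)) + 1j * rng.normal(size=(d * k, d))
V, _ = np.linalg.qr(V)  # columns orthonormal, so V^dag V = I_d
kraus = [V[i * d:(i + 1) * d, :] for i in range(k)]

def channel(rho, ops):
    return sum(K @ rho @ K.conj().T for K in ops)

# Random density matrix rho.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

# Check R∘D = D*∘R for R = transpose and D* the channel whose Kraus
# operators are the complex conjugates of those of D.
lhs = channel(rho, kraus).T
rhs = channel(rho.T, [K.conj() for K in kraus])
assert np.allclose(lhs, rhs)
```

This follows from $(K \rho K^\dagger)^T = \bar{K} \rho^T \bar{K}^\dagger$, applied term by term to the Kraus decomposition.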

This is not quite enough to get the no-cloning criterion, but a mild generalization will do the job. Graeme gave a nice explanation in terms of how teleporting information involves going backwards in time through the EPR pairs, and if PPT states are used, then the time-traveling information gets confused and doesn’t know whether to go forwards or backwards. However, if this principle is implicit in the proof, then it’s *very* implicit.

*Jean-Pierre Tillich, Quantum turbo codes with unbounded minimum distance and excellent error-reducing performance*

LDPC codes are successful classically, but quantumly they suffer many drawbacks: their distance isn’t so good (but see 0903.0566), their Tanner graphs have short cycles (specifically 4-cycles), and iterative decoding doesn’t work well. One particular no-go theorem is that there is no convolutional encoder that is both recursive and non-catastrophic (0712.2888).

In this talk, Tillich discussed an encoder that is both catastrophic and recursive. It achieves rate 1/8 and, somewhat surprisingly, a minimum distance that grows unboundedly with the block length with high probability. He conjectures that this growth should be polynomial for the right choice of interleaver.

The resulting code can be thought of not so much as “error-correcting” but “error-reducing.” Error rate p=0.14 becomes 10^{-3}, and p=0.105 becomes 10^{-4}. This compares favorably with the toric code threshold of p=0.15. He suspects that the limitation here comes from the iterative decoder.

*Jurg Wullschleger, The decoupling theorem*

The decoupling theorem is arguably the Book proof of most quantum coding theorems. The encoder applies a random unitary (chosen in some problem-dependent way) and transmits part of the output to the receiver; for the analysis, this transmitted part is traced out, and any part that is kept back is treated as controlled by Eve. If the resulting state has the reference system “decoupled” from Eve, then, since the remaining parts of the state (controlled by Bob) purify everything, a local unitary on Bob’s side can give him pure entangled states with the reference and, separately, with Eve. Thus the complicated task of transmitting quantum information reliably (which is hard enough that proving the coherent information was the quantum capacity originally took a lot of difficult technical work) is reduced to the simpler goal of *destroying* correlations.
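A toy version of decoupling is easy to simulate. The sketch below (all dimensions, the seed, and the thresholds are my own choices, not from the talk) maximally entangles a reference qubit with part of an 8-qubit system, applies a Haar-random unitary to the system, and checks that a small kept subsystem is nearly decoupled from the reference, while keeping almost everything leaves strong correlations:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix phases

def trace_dist(a, b):
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

dR, dA = 2, 256  # one reference qubit, eight system qubits
# Reference qubit maximally entangled with a 2-dim subspace of the system.
psi = np.zeros(dR * dA, dtype=complex)
psi[0 * dA + 0] = psi[1 * dA + 1] = 1 / np.sqrt(2)
psi = (psi.reshape(dR, dA) @ haar_unitary(dA).T).reshape(-1)  # random encoding

def dist_from_product(n_keep):
    """Trace distance between rho_{R A2} and the product of its marginals,
    keeping only n_keep of the 8 system qubits and tracing out the rest."""
    dA2 = 2 ** n_keep
    t = psi.reshape(dR, dA // dA2, dA2)
    rho4 = np.einsum('iaj,kal->ijkl', t, t.conj())   # trace out discarded part
    rho_RA2 = rho4.reshape(dR * dA2, dR * dA2)
    rho_R = np.einsum('ijkj->ik', rho4)
    rho_A2 = np.einsum('ijil->jl', rho4)
    return trace_dist(rho_RA2, np.kron(rho_R, rho_A2))

small = dist_from_product(1)  # discard 7 qubits: nearly decoupled
large = dist_from_product(7)  # discard 1 qubit: still strongly correlated
assert small < 0.45 < large
```

The point of the asymmetry is the one in the talk: discarding enough of the randomized system destroys the correlations with the reference, while discarding too little does not.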

Decoupling theorems were originally developed for the state-merging problem by Horodecki-Oppenheim-Winter ’05 (where it appeared as “Lemma 5” or something similarly marginal). The technique was further developed in quant-ph/0606225, where it was promoted to a Theorem, and eventually it moved into paper titles. So it took some time for the community to fully appreciate how useful this tool is.

Some of these tools use smoothed min- and max-entropies, which can be thought of as one-shot variants of von Neumann entropy that are either pessimistic or optimistic, depending on application. Amusingly, the smoothed max-entropy is not defined by taking a smoothed rank, but is defined in order to satisfy a relation that we’d like (which also holds for pure states). This is reminiscent of the speed of light, which is an integer number of meters/second by definition.

For any pure state $\rho_{ABC}$, the smoothed max-entropy is defined via the duality

$H_{\max}^{\epsilon}(A|B)_\rho := -H_{\min}^{\epsilon}(A|C)_\rho$.

Other definitions are also used, but are pretty close.
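For reference, the smoothed min-entropy that the max-entropy is dualized against is usually defined as follows (conventions vary slightly between papers, e.g. in which distance is used for smoothing, so treat this as one common choice):

```latex
% Conditional min-entropy (the "pessimistic" one-shot entropy):
H_{\min}(A|B)_\rho \;=\; \max_{\sigma_B} \sup \bigl\{ \lambda \in \mathbb{R} :
    \rho_{AB} \le 2^{-\lambda}\, \mathbb{1}_A \otimes \sigma_B \bigr\}

% Smoothing: optimize over states epsilon-close to rho
H_{\min}^{\epsilon}(A|B)_\rho \;=\; \max_{\bar\rho \,:\, d(\bar\rho, \rho) \le \epsilon}
    H_{\min}(A|B)_{\bar\rho}
```

The smoothed max-entropy is then fixed by evaluating the min-entropy on the complementary system of a purification, as in the duality above.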

In this talk, Jurg described a converse theorem to the decoupling theorem, and explained many scenarios in which it applies. See his paper for the details.

*Frédéric Dupuis, Classical coding via decoupling*

Decoupling is great for sending quantum messages, and it gives proofs that are simpler even than the original HSW proof of the classical capacity (or any other known proof of it). Thus it’s interesting to find a decoupling-based proof of the HSW theorem, not only for the sake of unifying our knowledge, but also because we might get a simpler proof out of it. This is essentially what is achieved here, although only when the channel inputs are uniformly distributed.

*Mark Wilde, Polar codes for classical, private, and quantum communication*

We are really good at dealing with correlated information, with tools like Slepian-Wolf coding, side-information-assisted hypothesis testing, and multiple-access channel codes, so we can treat the inputs to channels in this way. By performing simple $\mathbb{F}_2$-linear maps on the inputs, we can make $n$ binary channels, each with capacity C, become roughly $nC$ channels with capacity close to 1 and roughly $n(1-C)$ channels with capacity close to 0.
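For the binary erasure channel this polarization is simple enough to run in a few lines. In the sketch below (the starting channel, depth, and thresholds are my own choices), one polar step turns two copies of BEC(z) into a worse channel BEC(2z − z²) and a better channel BEC(z²); iterating drives almost every synthetic channel’s erasure probability towards 0 or 1, while the average erasure probability is exactly conserved:

```python
import numpy as np

# One polar-transform step maps two copies of a binary erasure channel
# BEC(z) to a worse channel BEC(2z - z^2) and a better channel BEC(z^2).
def polarize(z0, n_steps):
    zs = np.array([z0])
    for _ in range(n_steps):
        zs = np.concatenate([2 * zs - zs ** 2, zs ** 2])
    return zs

zs = polarize(0.5, 16)          # 2^16 synthetic channels from BEC(0.5)
good = np.mean(zs < 1e-3)       # erasure prob near 0: capacity near 1
bad = np.mean(zs > 1 - 1e-3)    # erasure prob near 1: capacity near 0

assert abs(zs.mean() - 0.5) < 1e-9  # average erasure prob is conserved
assert good + bad > 0.6             # most channels have polarized
assert abs(good - bad) < 0.01       # distribution symmetric around z = 1/2
```

As the depth grows, the `good` fraction approaches the capacity $1 - z_0 = 0.5$, which is exactly the polar-coding promise.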

Quantumly, much of this goes through, but we need the ability to simultaneously apply typical projections. This key lemma is Lemma 3 in Pranab Sen’s paper arXiv:1109.0802.

$1 - \mathrm{tr}\left[\Pi_k \cdots \Pi_1 \rho\, \Pi_1 \cdots \Pi_k\right] \le 2\sqrt{\sum_{i=1}^{k} \mathrm{tr}\left[(I - \Pi_i)\rho\right]}$.

Mark calls this a “noncommutative union bound.” Note that using a Winter-style “gentle measurement lemma” puts the square root inside the sum, which for this application cuts the achievable rate in half.
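Sen’s bound — $1 - \mathrm{tr}[\Pi_k \cdots \Pi_1 \rho \Pi_1 \cdots \Pi_k] \le 2\sqrt{\sum_i \mathrm{tr}[(I-\Pi_i)\rho]}$ — holds for any state and any sequence of projectors, so it is easy to sanity-check numerically. A sketch (the dimension, projector ranks, and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 16, 4  # Hilbert space dimension, number of projectors

def rand_projector(d, r):
    z = rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r))
    q, _ = np.linalg.qr(z)
    return q @ q.conj().T  # rank-r orthogonal projector

# Random pure state rho = |v><v|.
v = rng.normal(size=d) + 1j * rng.normal(size=d)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())

projs = [rand_projector(d, 12) for _ in range(k)]  # high rank, so each mostly "passes"

# Sequential measurement: Pi_k ... Pi_1 rho Pi_1 ... Pi_k.
post = rho
for P in projs:
    post = P @ post @ P

lhs = 1 - np.real(np.trace(post))
rhs = 2 * np.sqrt(sum(np.real(np.trace((np.eye(d) - P) @ rho)) for P in projs))
assert lhs <= rhs + 1e-9  # Sen's noncommutative union bound
```

The classical union bound is recovered when all the projectors commute; the price of noncommutativity is the square root (outside the sum).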

For private communication, we can polarize into channels of four types: either good for Bob and good for Eve, bad for both, or good for one and bad for the other. We send random bits into the channels that are good for both, and arbitrary bits into the ones that are bad for both. Information bits go into the ones that are good for Bob and bad for Eve, and shared secret key goes into the ones that are bad for Bob (so he effectively gets the message anyway), and good for Eve.

This generalizes to quantum communication using Devetak’s technique from quant-ph/0304127. Channel inputs are:

| Bob | Eve | input |
|---|---|---|
| Good | Good | |
| Good | Bad | information |
| Bad | Good | shared entanglement |
| Bad | Bad | arbitrary |

A big open question is to make the decoding efficient, and to figure out which channels are good.

*Joseph M. Renes, Quantum information processing as classical processing of complementary information*

The main theme is a CSS-like one: we can often treat quantum information as being classical information about two complementary observables.

For example, if you coherently measure in both the X and Z basis, then you’ve effectively done a swap with the qubit you’re measuring.
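This statement can be verified directly. In the numpy sketch below (the qubit labels and the particular Clifford corrections are my own reconstruction, not notation from the talk), coherently measuring Z onto one fresh ancilla (a CNOT) and then X onto another (the same, conjugated by Hadamards) moves the input state onto the Z-record ancilla, up to two CZ corrections — i.e., a swap up to local Cliffords:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])

def embed(ops):  # tensor a list of single-qubit operators together
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def on(q, U):
    ops = [I2] * 3
    ops[q] = U
    return embed(ops)

def ctrl(c, t, Ut):  # controlled-Ut from qubit c to qubit t
    a, b = [I2] * 3, [I2] * 3
    a[c], b[c], b[t] = P0, P1, Ut
    return embed(a) + embed(b)

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
zero, plus = np.array([1., 0.]), np.array([1., 1.]) / np.sqrt(2)

# Qubit 0 = data, qubit 1 = Z-measurement record, qubit 2 = X-measurement record.
state = np.kron(np.kron(psi, zero), zero)
state = ctrl(0, 1, X) @ state                        # coherent Z measurement
state = on(0, H) @ ctrl(0, 2, X) @ on(0, H) @ state  # coherent X measurement

# Up to two CZ corrections, the data has been swapped onto the Z record.
state = ctrl(1, 2, Z) @ ctrl(0, 2, Z) @ state
target = np.kron(np.kron(plus, psi), plus)
assert np.abs(np.vdot(target, state)) > 1 - 1e-9
```

After the corrections the original data qubit is left in $|+\rangle$, carrying no trace of $\psi$ — the information now lives entirely in the measurement records.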

This principle is related to uncertainty relations with quantum side information, such as

$H(X^A|B) + H(Z^A|C) \ge \log\frac{1}{c}$ and $H(X^A|B) + H(Z^A|B) \ge \log\frac{1}{c} + H(A|B)$.

Here c is the largest overlap between eigenvectors of the X and Z operators.

Rearranging the second inequality, we see

$-H(A|B) \ge \log\frac{1}{c} - H(X^A|B) - H(Z^A|B)$.

Thus, entanglement between A and B corresponds to the ability to predict complementary observables.
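The tripartite relation $H(X^A|B) + H(Z^A|C) \ge \log\frac{1}{c}$ can be checked numerically: for qubit observables X and Z the overlap is $c = 1/2$, so the right-hand side is 1 bit. A numpy sketch (the random state and the entropy bookkeeping conventions are mine):

```python
import numpy as np

rng = np.random.default_rng(4)

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -(ev * np.log2(ev)).sum()

# Random tripartite pure state on qubits A, B, C.
t = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
t /= np.linalg.norm(t)

xbasis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]
zbasis = [np.array([1, 0]), np.array([0, 1])]

def cond_entropy_after_measuring_A(basis, keep):
    """H(outcome | kept system) after measuring A in `basis`,
    keeping B (keep=1) or C (keep=2) and tracing out the other."""
    blocks = []
    for vec in basis:
        m = np.einsum('a,abc->bc', vec.conj(), t)  # unnormalized post-measurement
        rho = np.einsum('bc,dc->bd', m, m.conj()) if keep == 1 \
            else np.einsum('bc,bd->cd', m, m.conj())
        blocks.append(rho)
    # Classical-quantum state: entropy of the block-diagonal sum.
    h_joint = vn_entropy(np.block([[blocks[0], np.zeros((2, 2))],
                                   [np.zeros((2, 2)), blocks[1]]]))
    h_marginal = vn_entropy(blocks[0] + blocks[1])
    return h_joint - h_marginal

hxb = cond_entropy_after_measuring_A(xbasis, keep=1)  # H(X^A | B)
hzc = cond_entropy_after_measuring_A(zbasis, keep=2)  # H(Z^A | C)
assert hxb + hzc >= 1 - 1e-9  # log2(1/c) = 1 for X and Z on a qubit
```

If B can predict X well (small first term), then C must be correspondingly ignorant of Z, and vice versa.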

Many quantum information protocols can be understood in this way; e.g. entanglement distillation can be thought of as Alice and Bob having some correlated amplitude and phase information, and trying to do information reconciliation on these quantities.

Joe also talked about quantum polar codes, which create “synthetic” channels with suitable uses of CNOTs on the inputs. The idea is that CNOTs act on Z information in one direction and X information in the opposite direction, so a decoder need only separately figure out amplitude and phase information. There are subtleties: this information can be correlated, and it can exist on the same qubits. When amplitude and phase information is found on the same qubit, entanglement assistance is used.

This gives efficient decoding for Pauli channels and erasure channels. And in numerical experiments, the entanglement assistance appears to be often unnecessary.

Well, not-so-bad for a first try. The “did not parse” expression was the sequence of one-particle, two-particle, …, $n$-particle Hamiltonians, and physically speaking, the question is: what $n$-dependence(s) of tensor network rank is compatible with a well-behaved large-$n$ thermodynamic limit? Here the dynamics are pulled back onto a “spin-dust” tensor network that has no intrinsic spatial dimensionality.

With regard to the overall theme of information theory, condensed matter, thermodynamics, and nonequilibrium transport, here is a rather general question that may possibly relate to Deping Ye’s talk tomorrow, or to Tobias Osborne’s talk blogged earlier by Steve (or to any of the other talks at these two fine conferences).

Suppose on an $n$-particle Hilbert space we specify a quantum Hamiltonian that commutes with some Hermitian operator (for definiteness this might be a system of qubits interacting pairwise via exchange interactions). Then for entropy functions defined in the usual way we can proceed to study (for example) the thermodynamic phases of the system, always choosing the sequence of Hamiltonians so that the thermodynamic limit is well-behaved.

More generally, we can specify a rank-$r$ immersed Kählerian submanifold and pull back the entropy calculations onto it, obtaining a set of entropy functions indexed by both particle number $n$ and tensor rank $r$.

A natural question then is: what is the appropriate scaling of the pullback tensor rank $r(n)$ in the thermodynamic limit $n \to \infty$? Obviously, one such scaling is the classical physics immersion. But presumably this classical state-space is not accurate enough to simulate (for example) spin-liquid phase diagrams. Are we perhaps so fortunate (from a simulation efficiency point-of-view) that some constant rank $r$ yields reasonably accurate spin-liquid physics? Or rank $r \sim n^k$ for some integer $k$? For what condensed matter phases (if any) is it known that a full-rank immersion is required to obtain the correct phase diagram in the thermodynamic limit? How exactly do renormalization-group universality arguments work when tensor rank (or some power of it) is viewed as an adjustable thermodynamic parameter?

If anyone can provide links to references that touch upon these (to me, tough) thermodynamical questions, I would be grateful! 🙂

(And hereby I tender apologies, in advance, should it happen that the rather experimental LaTeX code in the above fails to parse)