Codes, Geometry and Random Structures: Day 2


Graeme Smith, Detecting incapacity, based on arXiv:1108.1807

A central question in quantum information theory is determining when a channel has nonzero quantum capacity. Pretty much all we know here is that there are a few criteria for proving a channel has zero quantum capacity: PPT channels can’t transmit entanglement (since LOCC can’t change PPT states to non-PPT states) nor can anti-degradable channels (because of no-cloning). These two arguments appear to both be pretty specific. Can we put them on the same footing, and hopefully also derive some new criteria?
That’s what this paper does. The talk was good, but the paper also describes the results clearly.
Here is a sample of the results.
Assume that R is unphysical on a set S; e.g. S is the set of quantum states and R is the transpose map. Suppose that for any physical map D, there exists a physical map $D^*$ with $R \circ D = D^* \circ R$. If R is the transpose, then $D^*$ is simply the operation you get by taking the complex conjugate of all the Kraus operators of D.
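Here is a quick numerical sanity check of that last claim for the full transpose (my own illustration, not from the paper), using randomly generated Kraus operators:

```python
# Check numerically that T(D(rho)) = D*(T(rho)), where T is the transpose
# and D* is the channel with complex-conjugated Kraus operators.
# (Illustration only; dimensions and seed are arbitrary.)
import numpy as np

rng = np.random.default_rng(0)
d, n_kraus = 3, 4

# Random channel: QR-decompose a Ginibre matrix to get an isometry
# V : C^d -> C^(d*n_kraus), then slice it into Kraus operators
# satisfying sum_i K_i^dag K_i = I.
G = rng.normal(size=(d * n_kraus, d)) + 1j * rng.normal(size=(d * n_kraus, d))
V, _ = np.linalg.qr(G)
kraus = [V[i * d:(i + 1) * d, :] for i in range(n_kraus)]

def channel(rho, ks):
    return sum(K @ rho @ K.conj().T for K in ks)

# Random density matrix.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

lhs = channel(rho, kraus).T                        # T(D(rho))
rhs = channel(rho.T, [K.conj() for K in kraus])    # D*(T(rho))
print(np.allclose(lhs, rhs))                       # True
```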
Their theorem then states that if $R \circ N$ is physical, then N cannot transmit the states in S. The proof is a simple exercise in composition.
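In symbols (my gloss on the argument, using the notation above): if N could transmit the states in S, there would be physical encoding and decoding maps E and D with $D \circ N \circ E = \mathrm{id}$ on S, and then

$$R = R \circ D \circ N \circ E = D^* \circ (R \circ N) \circ E \quad \text{on } S,$$

which is a composition of physical maps, contradicting the unphysicality of R on S.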
In this case we say that N “physicalizes” R or, equivalently, R “incapacitates” N.
This is not quite enough to get the no-cloning criterion, but a mild generalization will do the job. Graeme gave a nice explanation in terms of how teleporting information involves going backwards in time through the EPR pairs; if PPT states are used, then the time-traveling information gets confused and doesn’t know whether to go forwards or backwards. However, if this principle is implicit in the proof, then it’s very implicit.

Jean-Pierre Tillich, Quantum turbo codes with unbounded minimum distance and excellent error-reducing performance

LDPC codes are successful classically, but quantumly they suffer many drawbacks: their distance isn’t so good (but see 0903.0566), their Tanner graphs have short cycles (specifically 4-cycles), and iterative decoding doesn’t work. One particular no-go theorem is that there is no convolutional encoder that is both recursive and non-catastrophic (0712.2888).
In this talk, Tillich discusses an encoder that is recursive, and hence catastrophic. It achieves rate 1/8 and, somewhat surprisingly, it achieves minimum distance $\Omega(\log n / \log\log n)$ with high probability. He conjectures that this should be $\Omega(\log n)$ for the right choice of interleaver.
The resulting code can be thought of not so much as “error-correcting” but as “error-reducing.” An error rate of p = 0.14 becomes $10^{-3}$, and p = 0.105 becomes $10^{-4}$. This compares favorably with the toric code threshold of p = 0.15. He suspects that the limitation here comes from the iterative decoder.

Jurg Wullschleger, The decoupling theorem

The decoupling theorem is arguably the Book proof of most quantum coding theorems. The encoder applies a random unitary (in some problem-dependent way) and transmits part of the output to the receiver. The part lost to the channel is treated as being traced out, or, if it is kept by an adversary, as controlled by Eve. If the resulting state has the reference system “decoupled” from Eve, then, since the remaining parts of the state (controlled by Bob) purify everything, a local unitary on Bob’s side can give him pure entangled states with the reference and, separately, with Eve. This reduces the complicated task of transmitting quantum information reliably (which is hard enough that proving that the coherent information equals the quantum capacity originally took a lot of difficult technical work) to the simpler goal of destroying correlations.
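As a toy numerical illustration of decoupling (my own, with arbitrary choices of dimensions): entangle a reference qubit with one qubit of an 8-qubit system, scramble the system with a random unitary, hand a single qubit to Eve, and check that Eve’s share is nearly product with the reference.

```python
# Decoupling demo: after a random unitary on A, the reduced state on
# (reference R, Eve's kept qubit E) is close to a product state.
import numpy as np

rng = np.random.default_rng(1)
n = 8                      # qubits in the system A
dA, dR, dE = 2**n, 2, 2    # Eve keeps a single qubit of A

# |psi> = maximally entangled pair on (R, first qubit of A), rest of A
# in |0>, written as a dR x dA matrix M with |psi> = sum M[r,a] |r>|a>.
M = np.zeros((dR, dA), dtype=complex)
M[0, 0] = M[1, dA // 2] = 1 / np.sqrt(2)

# (Approximately Haar-)random unitary on A via QR of a Ginibre matrix.
G = rng.normal(size=(dA, dA)) + 1j * rng.normal(size=(dA, dA))
U, _ = np.linalg.qr(G)
M = M @ U.T                # apply (I_R tensor U) to |psi>

# Trace out the discarded part of A, keeping the top qubit as E.
T = M.reshape(dR, dE, dA // dE)
rho_RE = np.einsum('aek,bfk->aebf', T, T.conj()).reshape(dR * dE, dR * dE)
rho_R = np.einsum('aek,bek->ab', T, T.conj())
rho_E = np.einsum('aek,afk->ef', T, T.conj())

# Trace distance to the product of the marginals: should be small,
# roughly of order sqrt(dR * dE / dA) by decoupling-type estimates.
delta = rho_RE - np.kron(rho_R, rho_E)
print(0.5 * np.sum(np.abs(np.linalg.eigvalsh(delta))))
```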
Decoupling theorems were originally developed for the state-merging problem, by Horodecki-Oppenheim-Winter ’05 (where it was “Lemma 5” or something similarly marginal). It was then further developed in quant-ph/0606225, where it was promoted to a Theorem. Later it moved into paper titles. So it took some time for the community to fully appreciate how useful this tool is.
Some of these tools use smoothed min- and max-entropies, which can be thought of as one-shot variants of the von Neumann entropy that are either pessimistic or optimistic, depending on the application. Amusingly, the smoothed max-entropy is not defined by taking a smoothed rank, but is instead defined so as to satisfy a duality relation that we’d like to hold (stated below for pure states). This is reminiscent of the speed of light, which is an integer number of meters/second by definition.
For any pure state $\rho^{XYE}$, define the smoothed max-entropy to be
$$H_{\max}^{\epsilon}(X|Y)_\rho = - H_{\min}^{\epsilon}(X|E)_\rho.$$
Other definitions are also used, but are pretty close.
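For reference, here is one standard form of the underlying min-entropy definition (conventions differ slightly across papers, so take this as representative rather than as the exact one used in the talk):

$$H_{\min}(X|E)_\rho = \max_{\sigma_E} \sup\{\lambda \in \mathbb{R} : \rho_{XE} \le 2^{-\lambda}\, I_X \otimes \sigma_E\}, \qquad H_{\min}^{\epsilon}(X|E)_\rho = \max_{\bar\rho \,\approx_\epsilon\, \rho} H_{\min}(X|E)_{\bar\rho},$$

where the second maximization is over (subnormalized) states $\bar\rho$ within distance $\epsilon$ of $\rho$.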
In this talk, Jurg described a converse theorem to the decoupling theorem, and explained many scenarios in which it applies. See his paper for the details.

Frédéric Dupuis, Classical coding via decoupling

Decoupling is great for sending quantum messages, and it yields proofs simpler than even the original HSW proof of the classical capacity (or any other known proof of it). Thus it’s interesting to find a decoupling-based proof of the HSW theorem, not only for the sake of unifying our knowledge, but also to get a simpler proof. This is essentially what is achieved here, although only when the inputs are uniformly distributed.

Mark Wilde, Polar codes for classical, private, and quantum communication

We are really good at dealing with correlated information classically, with tools like Slepian-Wolf coding, side-information-assisted hypothesis testing, and multiple-access channel codes, so we can treat the inputs to channels in this way. By performing simple $\mathbb{F}_2$-linear maps on the inputs, n binary channels, each with capacity C, become roughly nC channels with capacity roughly 1 and roughly n(1−C) channels with capacity roughly 0.
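For intuition, here is the textbook special case of the binary erasure channel, where one polar step acts exactly on erasure probabilities: two copies of BEC(ε) synthesize a worse channel BEC(2ε − ε²) and a better channel BEC(ε²). Iterating shows the polarization. (This is standard polar-coding material, not something specific to the talk.)

```python
# Polarization of the binary erasure channel BEC(eps). After n levels,
# almost all of the 2**n synthesized channels have erasure probability
# near 0 or near 1, and the fraction of near-perfect channels tends to
# the capacity 1 - eps.
import numpy as np

eps, n = 0.3, 16           # channel parameter and number of levels
z = np.array([eps])
for _ in range(n):
    z = np.concatenate([2 * z - z**2, z**2])   # (minus, plus) channels

good = np.mean(z < 1e-6)   # fraction of near-perfect channels
print(good, 1 - eps)       # slowly approaches the capacity 0.7
```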
Quantumly, much of this goes through, but we need the ability to apply typical projections simultaneously. The key lemma is Lemma 3 in Pranab Sen’s paper arXiv:1109.0802:

$$1 - \mathrm{tr}\,\big[\Pi_N \cdots \Pi_1\, \rho\, \Pi_1 \cdots \Pi_N\big] \;\leq\; 2\sqrt{\sum_{i=1}^N \mathrm{tr}\big[(I-\Pi_i)\rho\big]}\,.$$

Mark calls this a “noncommutative union bound.” Note that using a Winter-style “gentle measurement lemma” puts the square root inside the sum, which for this application cuts the achievable rate in half.
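A quick numerical check of the bound with random high-rank projectors and a random state (my own sketch; the dimension, ranks, and seed are arbitrary):

```python
# Sanity-check Sen's noncommutative union bound:
# 1 - tr[Pi_N...Pi_1 rho Pi_1...Pi_N] <= 2*sqrt(sum_i tr[(I - Pi_i) rho]).
import numpy as np

rng = np.random.default_rng(2)
d, N, rank = 16, 4, 15

def rand_proj():
    # Random rank-`rank` projector: span of columns of a random isometry.
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(G)
    V = Q[:, :rank]
    return V @ V.conj().T

projs = [rand_proj() for _ in range(N)]

# Random density matrix.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

M = np.eye(d, dtype=complex)
for P in projs:            # build Pi_N ... Pi_1
    M = P @ M
lhs = 1 - np.trace(M @ rho @ M.conj().T).real
rhs = 2 * np.sqrt(sum(np.trace((np.eye(d) - P) @ rho).real for P in projs))
print(lhs <= rhs, lhs, rhs)
```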
For private communication, we can polarize into channels of four types: good for Bob and good for Eve, bad for both, or good for one and bad for the other. We send random bits into the channels that are good for both, and arbitrary bits into the ones that are bad for both. Information bits go into the ones that are good for Bob and bad for Eve, and shared secret key goes into the ones that are bad for Bob and good for Eve (since Bob already knows these bits, he effectively gets the message anyway).
This generalizes to quantum communication using Devetak’s technique from quant-ph/0304127. The channel inputs are:

Bob  | Eve  | input
Good | Good | $|+\rangle$
Good | Bad  | information
Bad  | Good | shared entanglement
Bad  | Bad  | arbitrary

A big open question is to make the decoding efficient, and to figure out which channels are good.

Joseph M. Renes, Quantum information processing as classical processing of complementary information

The main theme is a CSS-like one: we can often treat quantum information as being classical information about two complementary observables.
For example, if you coherently measure in both the X and Z bases, then you’ve effectively done a swap with the qubit you’re measuring.
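One way to make this concrete (my reading of the claim, not a circuit from the talk): a coherent Z readout is a CNOT into a fresh $|0\rangle$ ancilla, and a coherent X readout is a CNOT back from the ancilla (CNOT copies X-basis information in the reverse direction); together they move the qubit onto the ancilla.

```python
# Check that CNOT_21 CNOT_12 |psi>|0> = |0>|psi>, i.e. a coherent Z
# readout followed by a coherent X readout swaps the qubit onto the
# ancilla. Basis ordering: |q1 q2> = |00>, |01>, |10>, |11>.
import numpy as np

CNOT12 = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                   [0, 0, 0, 1], [0, 0, 1, 0]])   # control qubit 1
CNOT21 = np.array([[1, 0, 0, 0], [0, 0, 0, 1],
                   [0, 0, 1, 0], [0, 1, 0, 0]])   # control qubit 2

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

inp = np.kron(psi, [1, 0])                        # |psi> tensor |0>
out = CNOT21 @ CNOT12 @ inp
print(np.allclose(out, np.kron([1, 0], psi)))     # True
```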
This principle is related to entropic uncertainty relations:
$$H(X^A|B) + H(Z^A|C) \geq \log 1/c$$
$$H(X^A|B) + H(Z^A|B) \geq \log 1/c + H(A|B)$$
Here c is the largest (squared) overlap between eigenvectors of the X and Z operators.
Rearranging the second inequality (and using $c = 1/d$ for conjugate bases in dimension d), we see
$$H(A|B) \leq H(X^A|B) + H(Z^A|B) - \log d.$$
Thus, entanglement between A and B corresponds to the ability to predict complementary observables.
Many quantum information protocols can be understood in this way; e.g. entanglement distillation can be thought of as Alice and Bob having some correlated amplitude and phase information, and trying to do information reconciliation on these quantities.
Joe also talked about quantum polar codes, which create “synthetic” channels via suitable CNOTs on the inputs. The idea is that CNOTs act on Z information in one direction and on X information in the opposite direction, so a decoder need only figure out amplitude and phase information separately. There are subtleties: this information can be correlated, and it can sit on the same qubits. When amplitude and phase information land on the same qubit, we use entanglement assistance.
This gives efficient decoding for Pauli channels and erasure channels, and in numerical experiments the entanglement assistance often appears to be unnecessary.

2 Replies to “Codes, Geometry and Random Structures: Day 2”

  1. Well, not-so-bad for a first try. The “did not parse” expression was $\{H_1, H_2, \ldots, H_n, \ldots\}$ (the sequence of one-particle, two-particle, … n-particle Hamiltonians), and physically speaking, the question is what n-dependence(s) of tensor network rank r is compatible with a well-behaved large-n thermodynamic limit? Here the dynamics are pulled back onto a “spin-dust” tensor network that has no intrinsic spatial dimensionality.

  2. With regard to the overall theme of information theory, condensed matter, thermodynamics, and nonequilibrium transport, here is a rather general question that may possibly relate to Deping Ye’s talk tomorrow, or to Tobias Osborne’s talk blogged earlier by Steve (or to any of the other talks at these two fine conferences).
    Suppose on an $n$-particle Hilbert space $\mathcal{H}_n$ we specify a quantum Hamiltonian $H_n$ that commutes with some Hermitian operator $M_n$ (for definiteness this might be a system of $n$ qubits interacting pairwise via exchange interactions). Then for entropy functions $S_n(\langle H_n\rangle, \langle M_n\rangle \mid \mathcal{H}_n)$ defined in the usual way we can proceed to study (for example) the thermodynamic phases of the system, always choosing the sequence $H_1, H_2, \ldots, H_n, \ldots$ so that the thermodynamic limit $\lim_{n\rightarrow\infty} S_n$ is well-behaved.
    More generally, we can specify a rank-$r$ immersed Kählerian submanifold $\mathcal{K}_n^r \subset \mathcal{H}_n$ and pull back the entropy calculations to obtain a set of entropy functions $S_n^r(\langle H_n\rangle, \langle M_n\rangle \mid \mathcal{K}_n^r)$ for $r \in (1, 2^{\dim \mathcal{H}_n})$, that is, entropy functions indexed by both particle number $n$ and tensor rank $r$.
    A natural question then is: what is the appropriate scaling for the pullback tensor rank $r$ in the thermodynamic limit $n \rightarrow \infty$? Obviously, one such scaling is the classical physics immersion $\forall n\colon r = 1$. But presumably this classical state-space is not accurate enough to simulate (for example) spin-liquid phase diagrams. Are we perhaps so fortunate (from a simulation-efficiency point of view) that $r = c\, n^0$ yields reasonably accurate spin-liquid physics for some constant $c > 1$? Or $r \propto n^m$ for some integer $m$? For what condensed matter phases (if any) is it known that a full-rank immersion $r \propto 2^{\dim \mathcal{H}_n}$ is required to obtain the correct phase diagram in the thermodynamic limit? How exactly do renormalization-group universality arguments work when the tensor rank $r$ (or some power of it) is viewed as an adjustable thermodynamic parameter?
    If anyone can provide links to references that touch upon these (to me, tough) thermodynamical questions, I would be grateful! 🙂
    (And hereby I tender apologies, in advance, should it happen that the rather experimental LaTeX code in the above fails to parse)
