{"id":5568,"date":"2011-10-25T06:59:35","date_gmt":"2011-10-25T13:59:35","guid":{"rendered":"http:\/\/dabacon.org\/pontiff\/?p=5568"},"modified":"2011-10-25T06:59:35","modified_gmt":"2011-10-25T13:59:35","slug":"codes-geometry-and-random-structures-day-2","status":"publish","type":"post","link":"https:\/\/dabacon.org\/pontiff\/2011\/10\/25\/codes-geometry-and-random-structures-day-2\/","title":{"rendered":"Codes, Geometry and Random Structures: Day 2"},"content":{"rendered":"<p><!-- Now updated semi-live throughout the day.  Apologies to the RSS folks. --><\/p>\n<p style=\"text-align: center\"><strong><a href=\"http:\/\/www.crm.umontreal.ca\/Codes11\/horaire_e.html\">Codes, Geometry and Random Structures<\/a><\/strong><\/p>\n<p style=\"padding-left: 30px\"><i> Graeme Smith, <a href=\"http:\/\/www.crm.umontreal.ca\/Codes11\/pdf\/smith.pdf\">Detecting incapacity<\/a>, based on <a href=\"http:\/\/arxiv.org\/abs\/1108.1807\">1108.1807<\/a><\/i><\/p>\n<p>A central question in quantum information theory is determining when a channel has nonzero quantum capacity.  Pretty much all we know here is that there are a few criteria for proving a channel has <i>zero<\/i> quantum capacity: PPT channels can&#8217;t transmit entanglement (since LOCC can&#8217;t change PPT states to non-PPT states) nor can anti-degradable channels (because of no-cloning).  These two arguments appear to both be pretty specific.  Can we put them on the same footing, and hopefully also derive some new criteria?<br \/>\nThat&#8217;s what this paper does.  The talk was good, but the paper also describes the results clearly.<br \/>\nHere is a sample of the results.<br \/>\nAssume that R is unphysical on set S; e.g. S is the set of quantum states and R is the transpose map.  Suppose that for any physical map D, there exists a physical map D^* with $latex Rcirc D = D^* circ R$.  
If R is the partial transpose then $latex D^*$ is simply the operation you get from taking the complex conjugate of all the Kraus operators.<br \/>\nTheir theorem then states that if $latex R\\circ N$ is physical, N cannot transmit the states in S.  The proof is a simple exercise in composition.<br \/>\nIn this case we say that <i>N &#8220;physicalizes&#8221; R<\/i> or, equivalently, <i>R &#8220;incapacitates&#8221; N<\/i>.<br \/>\nThis is not quite enough to get the no-cloning criterion, but a mild generalization will do the job.  Graeme gave a nice explanation in terms of how teleporting information involves going backwards in time through the EPR pairs, and if PPT states are used, then the time-traveling information gets confused and doesn&#8217;t know whether to go forwards or backwards.   However, if this principle is implicit in the proof, then it&#8217;s <i>very<\/i> implicit.<\/p>\n<p style=\"padding-left: 30px\"><i> Jean-Pierre Tillich, <a href=\"http:\/\/www.crm.umontreal.ca\/Codes11\/pdf\/tillich.pdf\">Quantum turbo codes with unbounded minimum distance and excellent error-reducing performance<\/a><\/i><\/p>\n<p>LDPC codes are successful classically, but quantumly they suffer many drawbacks: their distance isn&#8217;t so good (but see <a href=\"http:\/\/arxiv.org\/abs\/0903.0566\">0903.0566<\/a>), their Tanner graphs have short cycles (specifically 4-cycles), and iterative decoding doesn&#8217;t work.  One particular no-go theorem is that there is no convolutional encoder that is both recursive and non-catastrophic (<a href=\"http:\/\/arxiv.org\/abs\/0712.2888\">0712.2888<\/a>).<br \/>\nIn this talk, Tillich discusses a recursive (and hence catastrophic) encoder.  It achieves rate 1\/8 and, somewhat surprisingly, a minimum distance of $latex \\Omega(\\log(n) \/ \\log\\log(n))$ with high probability.  
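To get a feel for how modest this guarantee is, here is a two-line check (my own aside; natural logs, and the unknown constant hidden in the $latex \Omega$ is set to 1):

```python
import math

# log(n)/loglog(n) grows extremely slowly: even at very large block lengths,
# the guaranteed minimum distance is a small number (constants ignored).
for n in (10**3, 10**6, 10**9):
    print(n, math.log(n) / math.log(math.log(n)))
```

So even at a block length of a billion, the bound is in the single digits, which is consistent with thinking of the code as "error-reducing" rather than "error-correcting."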
He conjectures that this should be $latex \\Omega(\\log n)$ for the right choice of interleaver.<br \/>\nThe resulting code can be thought of not so much as &#8220;error-correcting&#8221; but &#8220;error-reducing.&#8221;  Error rate p=0.14 becomes 10<sup>-3<\/sup>, and p=0.105 becomes 10<sup>-4<\/sup>.  This compares favorably with the toric code threshold of p=0.15.  He suspects that the limitation here comes from the iterative decoder.<\/p>\n<p style=\"padding-left: 30px\"><i> Jurg Wullschleger, <a href=\"http:\/\/www.crm.umontreal.ca\/Codes11\/pdf\/wullschleger.pdf\">The decoupling theorem<\/a><\/i><\/p>\n<p>The decoupling theorem is arguably the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Proofs_from_THE_BOOK\">Book proof<\/a> of most quantum coding theorems.  The encoder applies a random unitary (in some problem-dependent way) and transmits part of the output to the receiver.  Treat the transmitted part as being traced out, and if the encoder keeps part, consider it to be controlled by Eve.  If the resulting state has the reference system &#8220;decoupled&#8221; from Eve, then since the remaining parts of the state (controlled by Bob) purify everything, a local unitary on Bob&#8217;s side can give him pure entangled states with both the reference, and separately with Eve.  This allows the complicated task of transmitting quantum information reliably (which is hard enough that proving that the coherent information equals the quantum capacity originally took a lot of difficult technical work) to be reduced to the simpler goal of <i>destroying<\/i> correlations.<br \/>\nDecoupling theorems were originally developed for the state-merging problem, by Horodecki-Oppenheim-Winter &#8217;05 (where it was &#8220;Lemma 5&#8221; or something similarly marginal).  Then it was further developed by <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0606225\">quant-ph\/0606225<\/a>, where it was called a Theorem.  Then in <a><\/a>, it moved to the title.   
So it took some time for the community to fully appreciate how useful this tool is.<br \/>\nSome of these tools use smoothed min- and max-entropies, which can be thought of as one-shot variants of von Neumann entropy that are either pessimistic or optimistic, depending on application.   Amusingly, the smoothed max-entropy is not defined by taking a smoothed rank, but is defined in order to satisfy a relation that we&#8217;d like (which also holds for pure states).   This is reminiscent of the speed of light, which is an integer number of meters\/second by definition.<br \/>\nFor any pure state $latex \\rho^{XYE}$, define the smoothed max entropy to be<br \/>\n$latex H_{\\max}^\\epsilon(X|Y)_\\rho = -H_{\\min}^\\epsilon(X|E)_\\rho$.<br \/>\nOther definitions are also used, but are pretty close.<br \/>\nIn this talk, Jurg described a converse theorem to the decoupling theorem, and explained many scenarios in which it applies.  See <a href=\"http:\/\/arxiv.org\/abs\/1012.6044\">his paper<\/a> for the details.<\/p>\n<p style=\"padding-left: 30px\"><i> Fr\u00e9d\u00e9ric Dupuis, <a href=\"http:\/\/www.crm.umontreal.ca\/Codes11\/pdf\/dupuis.pdf\">Classical coding via decoupling<\/a><\/i><\/p>\n<p>Decoupling is great for sending quantum messages, and gives proofs that are simpler than even the original HSW proof of the classical capacity (or any other known proof of it).  Thus it&#8217;s interesting to find a decoupling-based proof of the HSW theorem, not only for the sake of the unification of knowledge, but also so that we can get a simpler proof.  
This is essentially what is achieved here, although only when the inputs are uniformly distributed.<\/p>\n<p style=\"padding-left: 30px\"><i>Mark Wilde, <a href=\"http:\/\/www.crm.umontreal.ca\/Codes11\/pdf\/wilde.pdf\">Polar codes for classical, private, and quantum communication<\/a><\/i><\/p>\n<p>We are really good at dealing with correlated information, with tools like Slepian-Wolf, side-information-assisted hypothesis testing and multiple-access channel codes.  So we can treat inputs to channels in this way.  We can perform simple $latex \\mathbb{F}_2$-linear maps on the inputs so that n binary channels, each with capacity C, become roughly nC channels with capacity close to 1 and roughly n(1-C) channels with capacity close to 0.<br \/>\nQuantumly, much of this goes through, but we need the ability to simultaneously apply typical projections.  The key lemma is Lemma 3 in Pranab Sen&#8217;s paper <a href=\"http:\/\/arxiv.org\/abs\/1109.0802\">arXiv:1109.0802<\/a>.<\/p>\n<blockquote><p>$latex 1 - {\\rm tr}\\, \\Pi_N \\cdots \\Pi_1 \\rho \\Pi_1 \\cdots \\Pi_N \\leq 2 \\sqrt{\\sum_{i=1}^N {\\rm tr}\\, (I-\\Pi_i)\\rho}$.<\/p><\/blockquote>\n<p>Mark calls this a &#8220;noncommutative union bound.&#8221;  Note that using a Winter-style &#8220;gentle measurement lemma&#8221; puts the square root inside the sum, which for this application cuts the achievable rate in half.<br \/>\nFor private communication, we can polarize into channels of four types: either good for Bob and good for Eve, bad for both, or good for one and bad for the other.  We send random bits into the channels that are good for both, and arbitrary bits into the ones that are bad for both.  
Information bits go into the ones that are good for Bob and bad for Eve, and shared secret key goes into the ones that are bad for Bob (so he effectively gets the message anyway), and good for Eve.<br \/>\nThis generalizes to quantum communication using Devetak&#8217;s technique from <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0304127\">quant-ph\/0304127<\/a>.   Channel inputs are<\/p>\n<table>\n<tr>\n<th>Bob<\/th>\n<th>Eve<\/th>\n<th>input<\/th>\n<\/tr>\n<tr>\n<td>Good<\/td>\n<td>Good<\/td>\n<td>$latex |+\\rangle$<\/td>\n<\/tr>\n<tr>\n<td>Good<\/td>\n<td>Bad<\/td>\n<td>information<\/td>\n<\/tr>\n<tr>\n<td>Bad<\/td>\n<td>Good<\/td>\n<td>shared entanglement<\/td>\n<\/tr>\n<tr>\n<td>Bad<\/td>\n<td>Bad<\/td>\n<td>arbitrary<\/td>\n<\/tr>\n<\/table>\n<p>A big open question is to make the decoding efficient, and to figure out which channels are good.<\/p>\n<p style=\"padding-left: 30px\"><i>Joseph M. Renes, <a href=\"http:\/\/www.crm.umontreal.ca\/Codes11\/pdf\/renes.pdf\">Quantum information processing as classical processing of complementary information<\/a><\/i><\/p>\n<p>The main theme is a CSS-like one: we can often treat quantum information as being classical information about two complementary observables.<br \/>\nFor example, if you coherently measure in both the X and Z bases, then you&#8217;ve effectively done a swap with the qubit you&#8217;re measuring.<br \/>\nThis principle is related to uncertainty principles<br \/>\n$latex H(X^A|B) + H(Z^A|C) \\geq \\log 1\/c$<br \/>\n$latex H(X^A|B) + H(Z^A|B) \\geq \\log 1\/c + H(A|B)$<br \/>\nHere c is the largest overlap between eigenvectors of the X and Z operators.<br \/>\nRearranging the second inequality (and taking $latex c = 1\/d$, as for maximally complementary observables), we see<br \/>\n$latex H(A|B) \\leq H(X^A|B) + H(Z^A|B) - \\log d$.<br \/>\nThus, entanglement between A and B corresponds to the ability to predict complementary observables.<br \/>\nMany quantum information protocols can be understood in this way; e.g. 
entanglement distillation can be thought of as Alice and Bob having some correlated amplitude and phase information, and trying to do information reconciliation on these quantities.<br \/>\nJoe also talked about quantum polar codes, which create &#8220;synthetic&#8221; channels with suitable uses of CNOTs on the inputs.  The idea is that CNOTs act on Z information in one direction and X information in the opposite direction.  And a decoder need only separately figure out amplitude and phase information.  There are subtleties: this information can be correlated, and it can exist on the same qubits.  When amplitude and phase information is found on the same qubit, we use entanglement assistance.<br \/>\nThis gives efficient decoding for Pauli channels and erasure channels.  And in numerical experiments, the entanglement assistance often appears to be unnecessary.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Codes, Geometry and Random Structures Graeme Smith, Detecting incapacity, based on 1108.1807 A central question in quantum information theory is determining when a channel has nonzero quantum capacity. 
Pretty much all we know here is that there are a few criteria for proving a channel has zero quantum capacity: PPT channels can&#8217;t transmit entanglement (since &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/dabacon.org\/pontiff\/2011\/10\/25\/codes-geometry-and-random-structures-day-2\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Codes, Geometry and Random Structures: Day 2&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[32],"tags":[],"class_list":["post-5568","post","type-post","status-publish","format-standard","hentry","category-general"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/5568","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/comments?post=5568"}],"version-history":[{"count":0,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/5568\/revisions"}],"wp:attachment":[{"href":"http
s:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/media?parent=5568"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/categories?post=5568"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/tags?post=5568"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}