{"id":5728,"date":"2011-12-13T12:25:33","date_gmt":"2011-12-13T19:25:33","guid":{"rendered":"http:\/\/dabacon.org\/pontiff\/?p=5728"},"modified":"2011-12-13T12:25:33","modified_gmt":"2011-12-13T19:25:33","slug":"qip-2012-day-1","status":"publish","type":"post","link":"https:\/\/dabacon.org\/pontiff\/2011\/12\/13\/qip-2012-day-1\/","title":{"rendered":"QIP 2012 Day 1"},"content":{"rendered":"<h3 style=\"text-align: center\"><u>Sergey Bravyi<\/u>, <a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#Bravyi\">Topological qubits: stability against thermal noise.<\/a><br \/>\nJoint work with Jeongwan Haah, based on arXiv:1105.4159.<\/h3>\n<p><a href=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-Sergey-bath.jpg?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-5774\" title=\"Topologically protected qubits in a thermal bath\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-Sergey-bath-1024x765.jpg?resize=525%2C392&#038;ssl=1\" alt=\"\" width=\"525\" height=\"392\" \/><\/a><br \/>\nSergey opened his talk with an image of a bagel in a Solfatara landscape, meant to evoke topologically-protected qubits (get it? a bagel?) in a thermal (steam) bath.<br \/>\nThe question is whether we can engineer an energy landscape with high barriers between encoded states so as to enable passive quantum error correction; i.e. to create a topologically protected qubit. Here are the main ingredients:<\/p>\n<ul>\n<li>Stabilizer codes on a lattice with local generators and code distance $latex d \\approx L$, where $latex L$ is the lattice size.<\/li>\n<li>A code Hamiltonian with an energy penalty for states outside the logical subspace. The Hamiltonian is defined simply by having each violated stabilizer cost unit energy.<\/li>\n<li>Thermal noise: should be described by a Markovian master equation with local jump operators. 
The spectral density should obey detailed balance; this will be the place where temperature enters.<\/li>\n<li>Finally, there should be a decoder. Here we consider the RG (renormalization group) decoder.<\/li>\n<\/ul>\n<p>Kitaev\u2019s <a href=\"https:\/\/en.wikipedia.org\/wiki\/Toric_code\">toric code<\/a> (1997) was the first example of a topological QECC. If you\u2019re not familiar with it, then you should read about that instead of the current results. The key idea is that the errors are strings that wrap around the torus, and so have length at least $latex L$. However, since a partial string incurs only a constant energy penalty, the <em>memory time<\/em> (=amount of time until the state degrades by a constant amount) is $latex e^{2\\beta}$ [<a href=\"http:\/\/arxiv.org\/abs\/0810.4584\">arXiv:0810.4584<\/a>] independent of the lattice size! This is pretty disappointing; we can get this performance by <em>not encoding the qubit at all<\/em>.<br \/>\nIndeed, <em>any<\/em> 2D topological stabilizer code has a constant energy barrier [<a href=\"http:\/\/arxiv.org\/abs\/0810.1983\">0810.1983<\/a>], which implies that relaxation to thermal equilibrium occurs at a rate independent of the system size $latex L$.<br \/>\nThis no-go result is daunting, but in higher dimensions we can still achieve some nontrivial topological protection. Here\u2019s what we want: a topological stabilizer code in a D-dimensional array with commuting Pauli stabilizers $latex G_a$, each with support of size O(1), and with each qubit acted on by O(1) stabilizers. The ground space of the corresponding Hamiltonian is the set of states that are +1 eigenvectors of every stabilizer. We would like the distance to grow at least linearly with the lattice size; such models have intrinsic robustness.<br \/>\nFortunately, there is a shining example of a spatially local stabilizer code with all the self-correcting properties we want. Unfortunately, it requires D=4; it is the 4-d toric code. 
Here is its energy landscape.<br \/>\n<a href=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-bravyi-4d-landscape.jpg?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-5775\" title=\"Energy landscape of the 4-D toric code\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-bravyi-4d-landscape-1024x730.jpg?resize=525%2C374&#038;ssl=1\" alt=\"\" width=\"525\" height=\"374\" \/><\/a><br \/>\nDefects are loops around error membranes, and any sequence of single-qubit Pauli errors mapping a ground state to an orthogonal one must create roughly $latex L$ defects at some step. Thus the memory time is exponential in $latex L$ for low $latex T$ [<a href=\"http:\/\/arxiv.org\/abs\/0811.0033\">0811.0033<\/a>].<br \/>\nAll 3D stabilizer Hamiltonians studied so far have a <em>constant<\/em> energy barrier. Indeed this is true of all 3D codes that are translation-invariant and \u201cscale-invariant\u201d [<a href=\"http:\/\/arxiv.org\/abs\/1103.1885\">1103.1885<\/a>]. (Here, scale invariance means that the ground state degeneracy doesn\u2019t depend on the size of the lattice.) Haah\u2019s breakthrough <a href=\"http:\/\/arxiv.org\/abs\/1101.1962\">1101.1962<\/a> is the first example of a 3-d stabilizer code without string-like logical operators. In his model, defects cannot move very far.<br \/>\nOur results give self-correcting properties of any topological stabilizer code that obeys the no-strings rule, e.g. Haah\u2019s codes.<\/p>\n<ol>\n<li>Their energy barrier is $latex \\Omega(\\log L)$.<\/li>\n<li>They have partial self-correction. Specifically, the Arrhenius law together with the logarithmic energy barrier gives lifetime $latex L^{c\\beta}$, for any inverse temperature $latex \\beta&gt;0$, and for $latex L \\leq L^*$ for some critical value $latex L^* \\sim e^{\\beta\/3}$. 
See figure:<\/li>\n<\/ol>\n<p>This lifetime reaches a maximum value (as a function of L) when L is proportional to $latex e^{\\beta\/3}$. Thus, choosing L optimally results in a memory lifetime of $latex T_{\\mathrm{mem}} \\sim e^{c\\beta^2\/3}$.<br \/>\nOne example of a code with no string-like errors is the 3-d cubic code of Haah. It is obtained by placing two qubits at each site of a cubic lattice and tiling the following stabilizer generators. See figure (from the Bravyi-Haah paper).<br \/>\n<a href=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-Haah-code.png?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-5776\" title=\"stabilizers for Haah's 3-d code\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-Haah-code.png?resize=229%2C115&#038;ssl=1\" alt=\"\" width=\"229\" height=\"115\" \/><\/a><br \/>\n<strong>The noise model:<\/strong> One can get a Markovian noise model by considering the Davies weak coupling limit, where the Lindblad operator $latex A_{k,\\omega}$ transfers energy $latex \\omega$ from the system to the bath. The spectral density $latex r(\\omega)$ obeys detailed balance: $latex r(-\\omega) = e^{-\\beta\\omega} r(\\omega)$. Note that this is the only place where temperature enters.<br \/>\n<strong>The decoder:<\/strong> After measuring the syndrome, the correction operator is found using ideas from Jim Harrington\u2019s thesis [2004; I can\u2019t find it online] and the Renormalization Group decoder from <a href=\"http:\/\/arxiv.org\/abs\/1006.1362\">1006.1362<\/a>. 
See photo.<br \/>\nThe strategy is to:<\/p>\n<ol>\n<li>Find connected clusters.<\/li>\n<li>For each connected cluster C, find the minimum enclosing box and try to annihilate C by an operator acting within the box.<\/li>\n<li>Stop if no defects are left.<\/li>\n<li>If some defects are left, then increase the length scale by a factor of 2 and repeat.<\/li>\n<\/ol>\n<p>The RG decoder stops when all defects have been annihilated or when the length scale reaches the lattice size, in which case we have failure. We can also have failure when all defects have been annihilated but the system has been returned to a wrong ground state.<br \/>\nThe RG decoder can be implemented in time poly(L) (and can be parallelized to run in time log(L)).<br \/>\nWe can define the memory time as follows. We time evolve according to the master equation given the local quantum jump Lindblad operators. After some time $latex t$, we run the decoder. Then we check the worst case (over all initial states) of the trace distance between the initial and final state:<br \/>\n$latex \\| \\mathcal{D}(\\rho(t)) - \\rho(0) \\|_1 \\le O(t) N 2^k e^{-\\beta m} (1+e^{-\\beta})^N.$<br \/>\nHere N is the number of physical qubits, k is the number of logical qubits, t is the time, $latex \\beta$ is the inverse temperature and m is the number of errors that can be corrected by the decoder (or more precisely, the height of the energy barrier). Essentially, there is a temperature\/energy factor that competes with an entropic factor.<br \/>\nAnalyzing it, we find that the memory time grows at least as fast as $latex L^{c\\beta}$ as long as $latex L \\le e^{\\beta\/3}$.<br \/>\nThis gives a large but finite lower bound on the memory time at a fixed temperature.<br \/>\n<strong>Is it optimal?<\/strong> Maybe the analysis is pessimistic and the Haah code actually has an exponentially large lifetime? Maybe growing L can result in unbounded memory time when $latex \\beta$ is sufficiently large? 
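The four decoding steps above can be sketched in code. Here is my own toy 1-D version (the real decoder works on 2-D and 3-D lattices, and `rg_decode` and its success test are simplifications): defects are points on a line, a cluster can be annihilated iff it contains an even number of defects ("neutral" cluster), and the length scale doubles each round.

```python
def rg_decode(defects, lattice_size):
    """Toy 1-D RG decoder: returns True if all defects are annihilated
    before the length scale reaches the lattice size, False otherwise."""
    scale = 1
    while defects and scale < lattice_size:
        # Step 1: group defects into connected clusters; points within
        # `scale` of each other land in the same cluster.
        defects = sorted(defects)
        clusters, cur = [], [defects[0]]
        for d in defects[1:]:
            if d - cur[-1] <= scale:
                cur.append(d)
            else:
                clusters.append(cur)
                cur = [d]
        clusters.append(cur)
        # Steps 2-3: annihilate every neutral (even-sized) cluster inside
        # its bounding box; odd clusters survive to the next round.
        defects = [d for c in clusters if len(c) % 2 == 1 for d in c]
        # Step 4: double the length scale and repeat.
        scale *= 2
    return not defects

print(rg_decode([3, 4, 10, 13], 64))  # two nearby pairs -> True (corrected)
print(rg_decode([1, 40], 64))         # scale hits lattice size -> False (failure)
```

The doubling of the length scale is what makes the runtime poly(L) with only log(L) rounds, as mentioned above.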
The authors carried out a Monte Carlo simulation of the Haah code with only X errors to test these theories. It used 1000 days of CPU time on IBM\u2019s Blue Gene.<br \/>\nThe plots are qualitatively similar to the lower bounds on memory time. In particular, they show that the logarithm of the maximum memory time scales quadratically with $latex \\beta$ and that the memory time for fixed temperature increases with $latex L$ and then starts to decrease.<br \/>\n<strong>How do the proofs go?<\/strong><br \/>\nIdea 1: The no-strings rule implies localization of errors. That is, any error E can be written as $latex E=E_{loc} \\cdot G$ where G is a stabilizer and $latex E_{loc}$ is supported on at most 2 neighborhoods.<br \/>\nIn order for the accumulated error to have large weight, at least one intermediate syndrome must be non-sparse, with at least one pair of defects within distance a of each other.<br \/>\nIdea 2: Uses the scale invariance of the no-strings rule.<br \/>\nDefine sparseness and denseness at different scales. A syndrome which is dense at p consecutive scales will include a cluster of p defects.<br \/>\nShow that to implement a logical operation, at least one intermediate syndrome must be dense at roughly log(L) different scales.<\/p>\n<h3 style=\"text-align: center\"><u>Mark Wilde<\/u>,<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#FHSSW\">Advances in classical communication for network quantum information theory<\/a>, based on <a href=\"http:\/\/arxiv.org\/abs\/1109.0802\">1109.0802<\/a> and <a href=\"http:\/\/arxiv.org\/abs\/1102.2624\">1102.2624<\/a><\/h3>\n<p>Consider the problem of an <em>interference channel<\/em>, with two senders and two receivers. This problem is hard enough in the &#8220;ccqq&#8221; case, meaning that the channel takes two classical inputs to a bipartite quantum output: i.e. $latex x_1,x_2 \\rightarrow \\rho_{x_1,x_2}^{B_1B_2}$. 
This is depicted here (figure taken from their paper).<br \/>\n<a href=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-interference-channel.png?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-5786\" title=\"A ccqq interference channel\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-interference-channel.png?resize=316%2C148&#038;ssl=1\" alt=\"\" width=\"316\" height=\"148\" \/><\/a><br \/>\nAlong the way to understanding this, we&#8217;ll need to consider a multiple-access channel, with two senders and one receiver. The achievable rate region is in a sense an optimistic one, bounded by the obvious constraints:<\/p>\n<p style=\"text-align: center\">$latex R_1 \\leq I(X_1;B|X_2)$<br \/>\n$latex R_2 \\leq I(X_2;B|X_1)$<br \/>\n$latex R_1 + R_2 \\leq I(X_1X_2;B)$<\/p>\n<p>Why are these obvious? The first two constraints are what we would obtain if the receiver has successfully decoded one message, and both parties get to use that information to design a joint encoder-decoder pair. The last constraint is what we would get if the two senders could work together, apart from the requirement that their average input to the channel should be product across the two senders. Thus, these are natural upper bounds. The nonobvious part is that this region can be <em>achieved<\/em>.<br \/>\nThe coding theorem was obtained by Andreas Winter, when he was a grad student (<a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/9807019\">quant-ph\/9807019<\/a>). How does it work? 
First, we observe that the desired rate region can be obtained as a convex combination of the \u201ccorner points\u201d of the rate region: $latex (I(X_1;B), I(X_2;B|X_1))$ and $latex (I(X_1;B|X_2), I(X_2;B))$.<br \/>\nTo achieve one of these corner points, use the fact that the two senders are sending uncorrelated inputs, so we can average over the inputs of one sender (say $latex X_2$) to obtain an effective single-sender\/single-receiver channel, for which coding is possible at rate $latex I(X_1;B)$. Of course, measurement is potentially destructive, but since we are operating in the regime of very low error, we can use the fact that measurements with nearly certain outcomes cause very little damage (the infamous \u201c<a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/9807019\">Tender Measurement Lemma<\/a>\u201d). (Ok, it\u2019s mostly called the \u201cgentle measurement lemma\u201d but I like \u201ctender.\u201d) Thus, the decoder obtains $latex X_1$ and, conditioned on it, can decode $latex X_2$. (Clearly the sender knows $latex X_1$ as well. Note that this is no longer true for the multiple-sender case.)<br \/>\n<span style=\"text-decoration: underline\">Sequential Decoding:<\/span> To do decoding, we need to use the typical projector for the average state, as well as the conditionally typical projectors for the codewords. The HSW idea is to use the pretty good measurement. The idea of sequential decoding, in analogy with the classical idea, is to check each codeword sequentially. It works out pretty similarly, with the measurement operators in both cases being lightly distorted versions of the conditionally typical projectors.<br \/>\nThe crucial lemma making this possible is from <a href=\"http:\/\/arxiv.org\/abs\/1109.0802\">1109.0802<\/a>, by Pranab Sen. He calls it a \u201cnoncommutative union bound\u201d. 
The statement is that<\/p>\n<p style=\"text-align: center\">$latex 1- \\mathrm{tr}(\\Pi_n \\cdots \\Pi_1 \\rho \\Pi_1 \\cdots \\Pi_n) \\le 2 \\sqrt{\\sum_{i=1}^n \\mathrm{tr}((1-\\Pi_i) \\rho)}$<\/p>\n<p>Successive decoding: In general, we\u2019d like to analyze these problems of multipartite information theory using the traditional single-sender\/single-receiver tools, like HSW. Since each individual code works with exponentially small error, and the gentle-measurement lemma states that decoding causes exponentially small damage, we should be able to compose several protocols without much trouble. The only subtlety is that the typical projectors don\u2019t commute, which is where the noncommutative union bound comes in. We apply it to the following \u201cintersection projectors.\u201d<br \/>\n$latex \\tilde\\Pi_{x_1^n, x_2^n} = \\Pi \\Pi_{x_1^n} \\Pi_{x_1^n, x_2^n} \\Pi_{x_1^n} \\Pi$<br \/>\nMost important open problem: find a general three-sender quantum simultaneous decoder. Solving this should hopefully yield the insights required to handle an unlimited number of senders.<\/p>\n<h3 style=\"text-align: center\"><u>Nilanjana Datta<\/u>,<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#SSY\">Quantum rate distortion, reverse Shannon theorems and source channel separation<\/a>,<br \/>\njoint work with Min-Hsiu Hsieh, Mark Wilde, based on <a href=\"http:\/\/arxiv.org\/abs\/1108.4940\">1108.4940<\/a>.<\/h3>\n<p>Compression and transmission: the source coding and noisy coding theorems of Shannon. The fundamental limit on compression is the entropy.<br \/>\nLossless compression is often too stringent a condition. For example, consider jpg compression, which gives faithful images (to a human eye) despite throwing away lots of bits of information. We consider Rate Distortion Theory, i.e. the theory of lossy data compression. We are interested in the maximum rate of compression given a fixed maximum amount of distortion. 
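For intuition, here is the textbook classical example (my own aside, not from the talk): for a uniform binary source with Hamming distortion, Shannon's rate-distortion function is R(D) = 1 - H(D) for 0 <= D <= 1/2, where H is the binary entropy.

```python
import math

def binary_entropy(p):
    """H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion_uniform_binary(D):
    """Shannon's rate-distortion function for a Bernoulli(1/2) source
    under Hamming distortion: R(D) = 1 - H(D) for 0 <= D <= 1/2."""
    if D >= 0.5:
        return 0.0  # a constant guess already achieves distortion 1/2
    return 1.0 - binary_entropy(D)

print(rate_distortion_uniform_binary(0.0))   # lossless: 1 bit per symbol
print(rate_distortion_uniform_binary(0.11))  # about 0.5 bits per symbol
print(rate_distortion_uniform_binary(0.5))   # 0 bits: no communication needed
```

Tolerating an 11% bit-flip rate halves the required rate, which is the kind of tradeoff the quantum versions below generalize.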
Define the rate distortion function:<br \/>\n$latex R_c(D) =$ the minimum classical asymptotic cost for sending many copies of the state $latex \\rho$ with per-copy distortion $latex \\leq D$, where $latex c$ is for classical.<br \/>\nFor a fixed value of the distortion function $latex 0 \\le D &lt; 1$, we work in the storage setting. Define a quantum rate distortion function in the asymptotic limit.<br \/>\nBarnum (in <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/9806065\">quant-ph\/9806065<\/a>) proved the first lower bound on the quantum rate distortion function:<br \/>\n$latex R_q(D) \\ge \\min_{S_\\rho(N,D)} I_c(\\rho,N)$<br \/>\nwhere the term on the right is the coherent information and $latex S_\\rho(N,D) = \\{N\\ \\mathrm{CPTP} : d(\\rho,N) \\le D\\}.$ But the coherent information can be negative, so it can\u2019t be a tight lower bound in all cases.<br \/>\nNow move to the communication setting. One can define the entanglement-assisted quantum rate distortion function. They can find a single-letter formula for it, given by $latex \\min_{S_\\rho(N,D)} \\frac12 I(R:B)_\\omega$. (In terms of the quantum mutual information.) This is both a lower bound and it is achievable by using quantum state splitting. Alternatively, the achievability follows from the quantum reverse Shannon theorem (QRST).<br \/>\nThe unassisted quantum rate distortion function can also be found using the QRST. One needs to use a regularized formula.<\/p>\n<h3 style=\"text-align: center\"><u>Jon Yard<\/u>,<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#SSY\">Quantum communication with Gaussian channels of zero quantum capacity<\/a>,<br \/>\njoint work with Graeme Smith and John A. Smolin, based on <a href=\"http:\/\/arxiv.org\/abs\/1102.4580\">1102.4580<\/a>.<\/h3>\n<p>The title on arxiv.org is \u201cGaussian bosonic synergy: quantum communication via realistic channels of zero quantum capacity\u201d. Realistic? Synergy?? 
Think about this, kids, before you go off to work in industrial research.<br \/>\nThis paper concerns the fascinating topic of channels with zero quantum capacity. For zero classical capacity, such channels are simple to describe. Here is one example.<br \/>\n<a href=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-cut-wire.png?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-5782\" title=\"Zero capacity channel\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-cut-wire-300x199.png?resize=300%2C199&#038;ssl=1\" alt=\"\" width=\"300\" height=\"199\" \/><\/a><br \/>\nBut zero quantum capacity occurs even for channels whose output depends on the input. For example, consider the completely dephasing channel, aka a classical channel. Obviously this has no quantum capacity. The fact that this channel has zero quantum capacity is both because it is anti-degradable (meaning that Eve\u2019s output can simulate Bob\u2019s; this implies 0-capacity by no-cloning) <em>and<\/em> because it is PPT, meaning it maps every input state to a PPT output state, or equivalently, its Jamiolkowski state (obtained by feeding in half of a maximally entangled state) is PPT. Sadly, these two conditions are still pretty much our only examples of zero-capacity channels. See <a href=\"https:\/\/dabacon.org\/pontiff\/?p=5568\">a previous talk of Graeme\u2019s<\/a> for a framework that could potentially expand this set of conditions.<br \/>\nLet\u2019s talk a bit more about those two examples. The anti-degradable channels are intuitively those that give more to Eve than to Bob. So erasure channels with erasure rate &gt;50% count, as do attenuation channels (for photons; these can be modeled by a beamsplitter) and certain depolarizing channels (using an argument due to Brandao, Oppenheim, and Strelchuk <a href=\"http:\/\/arxiv.org\/abs\/1107.4385\">http:\/\/arxiv.org\/abs\/1107.4385<\/a>). 
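As a small numerical aside (my own check of standard facts, not from the talk): the Jamiolkowski state of the qubit depolarizing channel rho -> (1-p) rho + p I/2 becomes PPT exactly at p = 2/3, so the PPT criterion alone kicks in far above the actual zero-capacity threshold for this channel.

```python
import numpy as np

def choi_depolarizing(p):
    """Choi/Jamiolkowski state of the qubit depolarizing channel,
    obtained by sending half of a maximally entangled pair through it."""
    phi = np.zeros((4, 4))
    for i in (0, 3):
        for j in (0, 3):
            phi[i, j] = 0.5  # |Phi+><Phi+| in the computational basis
    return (1 - p) * phi + p * np.eye(4) / 4

def partial_transpose(J):
    """Transpose the second qubit of a two-qubit density matrix."""
    return J.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

for p in (0.3, 2 / 3, 0.9):
    m = np.linalg.eigvalsh(partial_transpose(choi_depolarizing(p))).min()
    print(p, round(m, 4))  # smallest eigenvalue works out to (3p - 2)/4
```

The smallest eigenvalue crosses zero at p = 2/3, so PPT certifies zero quantum capacity only for p >= 2/3.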
On the other hand, the PPT condition is at least easy to check, and PPT channels include some channels with nonzero private capacity.<br \/>\nThe general question of characterizing zero-capacity channels is a very interesting one, and one that it\u2019s not clear we are up to. But here is a nice specific version of the problem. The capacity of the depolarizing channel drops to zero at noise rate $latex p^*$, where $latex 0.2552 \\leq p^* \\leq 1\/3$. What is $latex p^*$???<br \/>\nA good quote from Jon.<\/p>\n<blockquote><p>Superactivation demonstrates that asking about the capacity of a quantum channel is like asking about a person\u2019s IQ: one number isn\u2019t enough to capture everything you might want to know.<\/p><\/blockquote>\n<p><span style=\"text-decoration: underline\">The main result<\/span>: There exist Gaussian channels, each with zero quantum capacity, that can be combined to achieve nonzero capacity. These channels are more or less within the reach of current experiments (not sure about all the decoding), requiring about 10dB of squeezing and about 60 photons\/channel. This is interesting in part because Gaussian channels seemed to be so well-behaved! For example, there is no NPT bound entanglement in Gaussian channels.<br \/>\n<span style=\"text-decoration: underline\">Infinite dimensions:<\/span> This paper works in infinite dimensions, which many in our field are inherently uncomfortable with, and others are inexplicably drawn to. 
Like representation theory, or complexity classes with lots of boldface capital letters, the presence of phrases like \u201cQuasi-free maps on the CCR algebra\u201d can be either a good or a bad thing, depending on your perspective.<br \/>\nQ: For those of us who prefer finite dimensions, can we truncate the Hilbert space, perhaps by taking advantage of a constraint on the total number of photons?<br \/>\nA: In principle yes, but then the channels are no longer Gaussian, so our analysis doesn\u2019t work. In general, Gaussian things are easier to analyze, so that is a sense in which infinite dimensions can actually be simpler to deal with.<\/p>\n<h3 style=\"text-align: center\"><u>Runyao Duan<\/u>,<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#YDX\">Bounds on the distance between a unital quantum channel and the convex hull of unitary channels, with applications to the asymptotic quantum Birkhoff conjecture<\/a>,<br \/>\njoint work with Nengkun Yu and Quanhua Xu.<\/h3>\n<p>The story behind this work starts with the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Doubly_stochastic_matrix\">Birkhoff-von Neumann theorem<\/a> (often called Birkhoff\u2019s theorem), which states that doubly stochastic matrices (matrices with nonnegative entries and all rows and columns summing to one) are convex combinations of permutations. An analogous claim for quantum channels is that unital channels (i.e. those mapping the maximally mixed state to itself) are mixtures of unitaries. However, this claim <a href=\"http:\/\/xxx.lanl.gov\/abs\/quant-ph\/0209025\">is false<\/a>. More recently, Smolin, Verstraete and Winter conjectured in <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0505038\">quant-ph\/0505038<\/a> that many copies of a unital channel should asymptotically approach convex combinations of unitaries. 
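The classical Birkhoff-von Neumann decomposition mentioned above is easy to compute greedily; here is a brute-force sketch (my own toy code, feasible only for tiny matrices):

```python
from itertools import permutations

def birkhoff_decompose(M, tol=1e-9):
    """Greedy Birkhoff-von Neumann decomposition: write a doubly
    stochastic matrix as a convex combination of permutation matrices.
    Brute-forces over all permutations, so only for small n."""
    n = len(M)
    M = [row[:] for row in M]  # work on a copy
    decomposition = []         # list of (weight, permutation) pairs
    while True:
        # find the permutation with the largest minimum entry on its support
        best = max(permutations(range(n)),
                   key=lambda p: min(M[i][p[i]] for i in range(n)))
        w = min(M[i][best[i]] for i in range(n))
        if w <= tol:
            break
        decomposition.append((w, best))
        for i in range(n):
            M[i][best[i]] -= w  # peel off w times this permutation
    return decomposition

M = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
parts = birkhoff_decompose(M)
print(sum(w for w, _ in parts))  # weights sum to 1
```

Each round zeroes at least one entry, so the loop terminates; the quantum analogue fails precisely because no such peeling into unitaries is guaranteed.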
(The rest of their paper contained evidence suggestive of this claim, and in general has nice results that are an important precursor of the merging revolution in quantum information theory.) This conjecture was called the Asymptotic Quantum Birkhoff Conjecture (AQBC) and was discussed on Werner\u2019s <a href=\"http:\/\/qig.itp.uni-hannover.de\/qiproblems\/Asymptotic_Version_of_Birkhoff's_Theorem\">open problems page<\/a>. A few years later, it was shown to hold for everyone\u2019s favorite unital-but-not-mixture-of-unitaries channel (What\u2019s that? The one with Jamiolkowski state equal to the maximally mixed state on the antisymmetric subspace of $latex \\mathbb{C}^3 \\otimes \\mathbb{C}^3$, of course!) by <a href=\"http:\/\/arxiv.org\/abs\/0806.2820\">Mendl and Wolf<\/a>.<br \/>\nSadly, the AQBC is also false, as <a href=\"http:\/\/arxiv.org\/abs\/1009.0778\">Haagerup and Musat recently proved<\/a>. However, their paper uses von Neumann algebras, and their abstract begins<\/p>\n<blockquote><p>We study factorization and dilation properties of Markov maps between von Neumann algebras equipped with normal faithful states, i.e., completely positive unital maps which preserve the given states and also intertwine their automorphism groups.<br \/>\n&#8230;<\/p><\/blockquote>\n<p>(Another approach, in preparation, is due to <a href=\"http:\/\/www.mathnet.ru\/php\/seminars.phtml?option_lang=eng&amp;presentid=1223\">Ostrev, Oza and Shor<\/a>.)<br \/>\nThis raises a new open question: is there a nice simple proof that finite-dimension-loving quantum information people can follow without having to learn any new math? (Note: probably we <em>should<\/em> learn more about von Neumann algebras. Just like we should make our own pizza from scratch. But if there\u2019s a good pizzeria that delivers, I\u2019d still like to hear about it.)<br \/>\nThis result delivers, by disproving the AQBC with an elegant, intuitive proof. 
The key technical lemma is the kind of tool that I can imagine being useful elsewhere. It states that, for the problem of distinguishing two convex sets of quantum channels, there exists a single measurement strategy that works well in the worst case. This builds on a similar lemma (due to Gutoski-Watrous \u201804 and Jain \u201805) which is just minimax applied to state discrimination: If we want to distinguish two convex sets of density matrices, then there exists a single measurement that works well in the worst case. How well? Its performance is at least as good as what we get by choosing the worst pair of states from the two convex sets, and then choosing the optimal measurement based on these.<br \/>\nThis paper extends this result to sets of quantum channels. The optimal distinguishability of a pair of quantum channels is given by their diamond-norm distance, which is simply the largest trace distance possible between their outputs when applied to one half of some (possibly entangled) state. So when you want to distinguish <em>convex sets<\/em> of channels, they use minimax again to show that there exists a measurement strategy (this time consisting both of an input state to prepare AND a measurement on the output) that works well for any worst-case choice of input channels. This set of measurement strategies sounds potentially non-convex; however, <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0411077\">Watrous\u2019s SDP characterization<\/a> of the diamond norm shows that measurement strategies form a convex set, so everything works out.<br \/>\nNext, this paper applies this to find ways to estimate the distance between a unital channel and the convex hull of unitary channels. To do this, they use a type of channel which they call a Schur channel (more commonly known as Schur multipliers): given a matrix $latex A$, define the channel $latex \\Phi_A(X) = A \\circ X$. 
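A quick numerical sanity check (my own toy example, not from the paper): when A is positive semidefinite with unit diagonal, this Hadamard-product map preserves the trace and fixes the maximally mixed state.

```python
import numpy as np

# Toy example (mine): a Schur multiplier X -> A ∘ X, with A positive
# semidefinite and unit diagonal.
A = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
assert np.linalg.eigvalsh(A).min() >= 0          # A is positive

def schur_channel(X):
    return A * X                                 # entrywise (Hadamard) product

rng = np.random.default_rng(0)
G = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
X = G @ G.conj().T                               # random positive input ...
X /= np.trace(X).real                            # ... normalized to a state

print(np.isclose(np.trace(schur_channel(X)).real, 1.0))          # trace preserved
print(np.allclose(schur_channel(np.eye(3) / 3), np.eye(3) / 3))  # unital
```

The trace is preserved because the diagonal of A ∘ X is diag(A) times diag(X), and diag(A) is all ones.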
Such channels are called Schur channels.<br \/>\n<span style=\"text-decoration: underline\">Thm:<\/span> TFAE:<\/p>\n<ol>\n<li>$latex \\Phi_A$ is a Schur channel<\/li>\n<li>$latex \\Phi_A$ is unital with diagonal Kraus operators<\/li>\n<li>A is positive and $latex a_{k,k}=1$ for each $latex 1 \\leq k \\leq d$.<\/li>\n<\/ol>\n<p>Using Schur channels, and their discrimination lemma, they are able to make short work of the AQBC. Their next result is that any Schur channel for which conv(Kraus operators of the channel) doesn\u2019t include any unitary is a counterexample to AQBC. This will follow from the fact that the closest random-unitary channel to a Schur channel can be taken (up to a factor of 2 in the distance) to be a mixture of diagonal unitaries.<br \/>\nThis work doesn\u2019t tell us <em>all<\/em> of the AQBC-violating channels, but it does describe a large class of them.<br \/>\nThey also mentioned connections to Grothendieck\u2019s inequality (work in progress) and Connes\u2019 embedding problem. Connections to these two topics were also mentioned in <a href=\"http:\/\/arxiv.org\/abs\/1009.0778\">Haagerup and Musat\u2019s work<\/a>.<\/p>\n<h3 style=\"text-align: center\"><u>Markus Greiner<\/u>, Quantum Magnetism with Ultracold Atoms &#8211; A Microscopic View of Artificial Quantum Matter.<\/h3>\n<p>Some motivation: many condensed matter models can\u2019t be tested with current experiments. Even in simple models it is difficult to calculate basic quantities. Want to build new materials. The idea: use ultracold atoms in optical lattices to simulate condensed matter models. The dream of a quantum simulator is to build a special purpose device which is still useful before full-scale quantum computers are built.<br \/>\nHe mentioned that quantum simulators are robust to errors&#8230; I believe that this is an open question which theorists should be working on.<br \/>\nHow do we cool the atoms? First trap the atoms with lasers. 
For weakly interacting quantum gases, we can form a BEC where all of the particles can be well described by a single wavefunction. But for strongly correlated quantum gases there are interactions which preclude such a description. In an optical lattice, we have atoms interacting strongly on lattice sites. There is new physics here: for example, the superfluid to Mott insulator transition. We can use the optical lattice to make synthetic matter: we can simulate electrons in a crystal. With fermionic atoms, we can observe a fermionic superfluid and things like the BEC-BCS crossover. Bose-Hubbard models and high-T_c superconductivity could also be probed in these systems. Also, quantum magnets, low-dimensional materials, etc.<br \/>\nLet\u2019s focus on quantum magnetism. The main experimental tool is the quantum gas microscope. The goal is to take \u201cbottom-up\u201d tools like high-fidelity readout and local control from e.g. ion traps to more \u201ctop-down\u201d systems like optical lattices. (This is clearly related to scalability of quantum computation, ultimately.) The quantum gas microscope can image fluorescence from single atoms trapped in the lattice. To image this, they measure parity.<br \/>\nHe then showed a movie of atoms hopping around in a shallow lattice (in the classical regime, obviously). Very cool. Here is one still from the movie:<br \/>\n<a href=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-Greiner-atoms.jpg?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-5785\" title=\"Atoms in an optical lattice\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/1-Greiner-atoms-300x225.jpg?resize=300%2C225&#038;ssl=1\" alt=\"\" width=\"300\" height=\"225\" \/><\/a><br \/>\nThe Hamiltonian for this system is a hopping term and an interaction term which penalizes multi-site occupations: a basic Bose-Hubbard model. 
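To make that concrete, here is a tiny exact-diagonalization sketch (my own illustration, not the experiment): two bosons on two sites, with hopping J and on-site repulsion U, showing the two regimes of the Bose-Hubbard model.

```python
import numpy as np

def two_site_bose_hubbard(J, U):
    """Two bosons on two sites in the occupation basis {|2,0>, |1,1>, |0,2>}:
    H = -J (b1^dag b2 + h.c.) + (U/2) * sum_i n_i (n_i - 1)."""
    s = np.sqrt(2.0)  # bosonic enhancement: <2,0| b1^dag b2 |1,1> = sqrt(2)
    return np.array([[   U, -s*J,  0.0],
                     [-s*J,  0.0, -s*J],
                     [ 0.0, -s*J,    U]])

for J, U, label in [(1.0, 0.01, "hopping dominates"),
                    (0.01, 1.0, "repulsion dominates")]:
    evals, evecs = np.linalg.eigh(two_site_bose_hubbard(J, U))
    gs = evecs[:, 0]                    # ground state (lowest eigenvalue)
    p_double = gs[0]**2 + gs[2]**2      # weight on doubly occupied sites
    print(label, round(float(p_double), 3))
```

With hopping dominant the ground state delocalizes and carries about half its weight on double occupancies (superfluid-like); with repulsion dominant the double-occupancy weight is suppressed toward zero (Mott-like), which is the transition described next.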
If the hopping terms dominate, then it\u2019s a superfluid; if the on-site repulsion is dominant, we get a Mott insulator. They can get approximately unit site occupation number in the Mott insulator regime.<br \/>\nHow to measure interesting quantities, like correlation functions? Measure the site occupation number (mod 2) on neighboring sites. One can push this further and measure string order parameters. In the future, the hope is to measure more complicated things like Chern number.<br \/>\nThey observed the Lieb-Robinson light-cone of correlations.<br \/>\nAlgorithmic cooling: Suppose that you have random site occupations. Then you can try to \u201ccool\u201d the system by pushing some of the entropy into a subsystem and getting a pure state on the rest. They use Rydberg blockade interactions to try to smooth out the site occupation number fluctuations, and then transfer to a superfluid. The temperature is not usually constant in their system, but the entropy is consistently low.<br \/>\nQuantum magnetism: Ising spin models. First consider a chain with on-site longitudinal and transverse fields. There are two phases: paramagnetic and antiferromagnetic. By putting a Mott insulator in an electric field, we can observe this type of transition. We can try to extend these results to two-dimensional frustrated spin systems. For example, dimerized states (four-state clock model). These models have spin liquid-like ground states.<br \/>\nTowards a universal quantum computer: The toolbox includes low entropy initialization, single site readout, single site control, and interactions, all with high fidelity. So what does the future hold?<br \/>\nQ: What types of error are these quantum simulators robust to?<br \/>\nA: Temperature is one type of error. Thermalization might not always occur. 
Sometimes you want some disorder, since real systems also have disorder.<br \/>\nQ: When you say \u201cfidelity\u201d what do you mean by that?<br \/>\nA: We mean the Hamming distance between classical basis states when we talk about paramagnetic order. That is, our fidelity is 99% if a fraction 99\/100 of our spins are correctly aligned when we image the sample.<br \/>\nOne question that I didn\u2019t get to ask: what can they say about models which have degenerate ground states? I\u2019m obviously thinking about e.g. the toric code and other topologically ordered models. Can they prepare specific pure states in the ground state manifold?<\/p>\n<h3 style=\"text-align: center\"><u>Andr\u00e9 Chailloux<\/u>, <a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#CK\">Optimal Bounds for Quantum Bit Commitment<\/a>. Joint work with Iordanis Kerenidis, based on <a href=\"http:\/\/arxiv.org\/abs\/1102.1678\">1102.1678<\/a>.<\/h3>\n<p>Some basic primitives for secure transactions:<\/p>\n<ul>\n<li>Oblivious transfer (OT)<\/li>\n<li>Coin Flipping<\/li>\n<li><a href=\"https:\/\/en.wikipedia.org\/wiki\/Commitment_scheme\">Bit Commitment<\/a><\/li>\n<\/ul>\n<p>Unlike QKD, here there are only two players, who distrust each other.<br \/>\nLet Alice and Bob\u2019s optimal cheating probabilities be $latex P_A^*$ and $latex P_B^*$.<br \/>\nThe best known quantum protocol achieves<br \/>\n$latex \\max\\{P_A^*, P_B^*\\} = 3\/4$ [Ambainis \u201801]<br \/>\nand the best known lower bound is<br \/>\n$latex \\max\\{P_A^*, P_B^*\\} \\geq 1\/\\sqrt{2}$ [Kitaev \u201803]<br \/>\nWe improve the bound to $latex \\max\\{P_A^*, P_B^*\\} \\geq 0.739$ and build QBC protocols that get arbitrarily close to this bound.<br \/>\nThe proof uses Mochon\u2019s weak coin flipping protocol.<\/p>\n<p style=\"padding-left: 30px\"><em>Gilles Brassard, Merkle Puzzles in a quantum world.<\/em> Joint work with Peter Hoyer, Kassem Kalach, Marc Kaplan, Sophie Laplante, Louis Salvail, based on <a 
href=\"http:\/\/arxiv.org\/abs\/1108.2316\">arXiv:1108.2316<\/a><\/p>\n<p>Ralph Merkle was the inventor of public-key crypto in 1974, before Diffie and Hellman (but after the classified discovery). Gilles began with a great quote from Merkle:<\/p>\n<blockquote><p>The human mind treats a new idea the same way as the human body treats a new protein.<\/p><\/blockquote>\n<p>See <a href=\"http:\/\/merkle.com\/1974\">Merkle\u2019s web site<\/a> for the amazing story of how he proposed public-key cryptography as a class project in grad school, how the professor described it as \u201cmuddled terribly,\u201d and how he then submitted it to JACM, which rejected it because it was \u201coutside of the mainstream of cryptographic thinking.\u201d<br \/>\n<span style=\"text-decoration: underline\">The key distribution problem as imagined by Merkle:<\/span><br \/>\nAlice and Bob talk over an authenticated public channel. After doing so, Alice and Bob should share a secret. This can only be computationally secure, but back then it was not even thought to be computationally possible.<br \/>\nThe goal is to make the eavesdropper\u2019s effort grow as fast as possible compared to Alice and Bob\u2019s effort.<br \/>\nWe will measure effort by query complexity to a random oracle. 
We use that model because it allows one to prove lower bounds.<br \/>\n<span style=\"text-decoration: underline\">Merkle\u2019s 1974 idea:<\/span><br \/>\nChoose a hard-to-invert, but injective, function f that acts on $latex [N^2]$.<br \/>\nAlice chooses N random values of x and computes the values of f(x).<br \/>\nAlice sends all these values to Bob.<br \/>\nBob wants to invert one of them.<br \/>\nHe keeps trying his own random x\u2019s until he gets a collision; call it s.<br \/>\nBob sends f(s) to Alice and Alice finds the secret s.<br \/>\nSo after N queries by Alice and N by Bob, they share the same s.<br \/>\nThe eavesdropper has seen only N useless f(x) values and one useful f(s), which takes about $latex N^2\/2$ queries to invert, compared to N queries for Alice and Bob.<br \/>\nOk, so this is only a quadratic separation, but we don\u2019t actually know that Diffie-Hellman is any better. The matching lower bound was proven by Barak and Mahmoody 08.<br \/>\nWhat about the quantum world, where the eavesdropper is quantum? Alice and Bob could be quantum but don\u2019t need to be. The channel remains purely classical, so A&amp;B can\u2019t do QKD.<br \/>\nIf Eve is quantum, and Alice and Bob are classical, then Eve can find the key in O(N) time, which is the same amount of effort that Alice and Bob expend (albeit quantum and not classical). Unfortunately, this breaks the Merkle scheme completely.<br \/>\nCan it be repaired by allowing A&amp;B to be quantum, or even keeping them classical?<br \/>\nOne option is to keep Alice classical, use a space of size $latex N^3$, and allow Bob to use BBHT search to find one preimage in O(N) quantum queries. Eve using Grover needs $latex N^{3\/2}$ queries, so Alice and Bob get an advantage over Eve.<br \/>\nAn improved protocol is for Bob to find two preimages in O(N) quantum queries.<br \/>\nBob sends Alice the bitwise XOR of f(s) and f(s\u2019). Given this, Alice uses a table to find the secret.<br \/>\nThis gives effort $latex O(N^{5\/3})$ for Eve and O(N) for Alice and Bob. 
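To make the classical baseline concrete, here is a toy simulation of Merkle\u2019s 1974 scheme; the truncated SHA-256 \u201coracle\u201d, the value of N, and the domain size are all illustrative stand-ins of my own, not from the talk:

```python
import hashlib
import secrets

# Toy Merkle-puzzle key agreement (illustrative parameters). The "oracle" f is
# a truncated hash standing in for a hard-to-invert injective function on [N^2].
N = 200
DOMAIN = N * N

def f(x):
    return hashlib.sha256(x.to_bytes(8, "big")).hexdigest()[:16]

# Alice: N oracle queries; she publishes the images f(x) (keeping x secret).
alice_table = {f(x): x for x in (secrets.randbelow(DOMAIN) for _ in range(N))}

# Bob: queries random points until he collides with one of Alice's images
# (about N queries in expectation, since |table| / DOMAIN is roughly 1/N).
bob_queries = 0
while True:
    s = secrets.randbelow(DOMAIN)
    bob_queries += 1
    if f(s) in alice_table:
        break

# Bob announces f(s); Alice inverts it with her table. Shared secret: s.
assert alice_table[f(s)] == s
print("agreed on a secret after", bob_queries, "queries by Bob")
```

An eavesdropper who sees only the transcript must invert f(s) by brute-force search over the domain, roughly $latex N^2\/2$ queries: the quadratic separation discussed above.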
(It\u2019s not trivial to show that Eve can attack in $latex O(N^{5\/3})$ queries, and even harder to show that this is optimal.)<br \/>\nThe proof involves something similar to Ambainis\u2019 element distinctness algorithm with a quantum walk on the Johnson graph. A crucial observation relates to the composition of our variant of element distinctness on N elements with searching each function value in a set of size $latex N^2$. This involves a new composition theorem for non-boolean functions.<br \/>\nWhat if Bob is classical?<br \/>\nLike before, there are two functions f and t, each on a domain of size $latex N^2$.<br \/>\nBob finds two preimages as in Merkle. The rest is the same: bitwise XOR.<br \/>\nHow much time does a quantum eavesdropper need? $latex N^{7\/6}$.<br \/>\nAlso, there is a sequence of protocols for which the best attack we can find tends to $latex N^2$ in the limit. See the paper for details. \ud83d\ude42<\/p>\n<h3 style=\"text-align: center\"><u>Salman Beigi<\/u>,<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~qip2012\/abstract.php#BeK\">Simplified instantaneous non-local quantum computation with applications to position-based cryptography<\/a>, joint work with Robert Koenig.<\/h3>\n<p>Position-based crypto (i.e., authentication of position) works by distance-bounding techniques.<br \/>\nChallenge-response protocol:<br \/>\nV1 &#8212;&#8212;&#8212;-P&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;V2<br \/>\nThe verifiers send challenges timed for synchronized arrival at r0, the claimed position of the prover.<br \/>\nThe honest prover computes and sends the answers.<br \/>\nClassical challenge\/response is susceptible to attack by colluding cheating provers who each copy the challenge and forward it to the other. 
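A back-of-the-envelope sketch of that copying attack (my own toy 1-D model with unit signal speed; the positions and times are invented for illustration):

```python
# One-dimensional distance-bounding toy model (illustrative numbers). V1 sits
# at position 0, V2 at D, and the prover claims position P. Challenges x (from
# V1) and y (from V2) are timed to reach P simultaneously at time T, and the
# response f(x, y) must arrive back at each verifier after the signal-travel
# time. Two colluding cheaters at C1 < P < C2 copy each challenge to one
# another and reproduce the honest timings exactly.
def honest_arrivals(D, P, T):
    # honest prover at P answers instantly; arrival times at V1 and V2
    return T + P, T + (D - P)

def cheating_arrivals(D, P, C1, C2, T):
    x_at_c1 = T - P + C1               # V1's challenge passes C1 en route to P
    y_at_c2 = T - (D - P) + (D - C2)   # V2's challenge passes C2
    y_at_c1 = y_at_c2 + (C2 - C1)      # forwarded copies cross the gap
    x_at_c2 = x_at_c1 + (C2 - C1)
    # each cheater answers once it holds both challenges
    return max(x_at_c1, y_at_c1) + C1, max(x_at_c2, y_at_c2) + (D - C2)

D, P, T = 10.0, 4.0, 100.0
print(honest_arrivals(D, P, T))                # (104.0, 106.0)
print(cheating_arrivals(D, P, 1.0, 7.0, T))    # same timings: attack unnoticed
```

Since the cheaters' responses arrive at exactly the honest times, no classical timing check can tell them apart from a prover at P.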
This shows why classical protocols are insecure.<br \/>\nKent, Munro and Spiller 2011 gave protocols with both insecurity and security proofs.<br \/>\nThe cheating strategy uses teleportation.<br \/>\nIn general a quantum protocol is a bipartite unitary.<br \/>\nAn attack is a simulation of that unitary by local operations, shared entanglement and one round of communication.<br \/>\nVaidman 03 achieves this with doubly exponential cost in ebits via recursive teleportation.<br \/>\nBuhrman et al 09 key insights:<br \/>\nA simplified protocol uses $latex O(2^n)$ ebits.<br \/>\nIt uses port-based teleportation, due to Ishizaka and Hiroshima (PRL 2008).<br \/>\nShare K maximally entangled states.<br \/>\nThe measurement is highly complicated.<br \/>\nRecovery is very simple: Alice discards all but one of her halves of the entangled states.<br \/>\nNew protocol for instantaneous measurement.<br \/>\n<span style=\"text-decoration: underline\">Position-based crypto: cheating landscape<\/span><br \/>\nVaidman: doubly exponential entanglement always suffices for cheating.<br \/>\nPort-based: singly exponential entanglement suffices for cheating.<br \/>\nIf cheaters are restricted to classical communication and only linear entanglement, one can get exponential soundness for position-based crypto.<br \/>\nIf cheaters are allowed quantum communication and linear entanglement, one can get only constant soundness.<\/p>\n<h3 style=\"text-align: center\"><u>Florian Speelman<\/u>, Garden Hose Game and Application to Position-Based Quantum Cryptography. Joint work with Harry Buhrman, Serge Fehr, Christian Schaffner, based on <a href=\"http:\/\/arxiv.org\/abs\/1109.2563\">arXiv:1109.2563<\/a>.<\/h3>\n<p><span style=\"text-decoration: underline\">Position verification applications:<\/span><\/p>\n<ul>\n<li>launching nuclear missiles (against enemies not equipped with FTL neutrinos, of course)<\/li>\n<li>preventing prank calls of ambulances, SWAT teams or pizzas<\/li>\n<li>communicating with the right wifi router<\/li>\n<li>computing international roaming fees properly<\/li>\n<li>for when you want to know you are talking to S. 
Korea and not N. Korea<\/li>\n<li>proving that \u201clive\u201d-blogging is actually occurring from the conference location, and is not just the result of typing out our previous opinions of the results.<\/li>\n<\/ul>\n<p>On the positive side, one can show security if it is promised that there is no shared entanglement.<br \/>\nThey show a specific class of schemes obtaining a tradeoff: increased classical communication for the honest players.<br \/>\nExample (classically secure but broken by teleportation):<br \/>\nVerifier 0 sends psi to the prover.<br \/>\nVerifier 1 sends a bit to the prover.<br \/>\nThe prover routes psi left or right according to the bit.<br \/>\nInstead of one bit, use a function:<br \/>\nV0 sends the state and an n-bit string x to the prover.<br \/>\nV1 sends an n-bit string y to the prover.<br \/>\nThe prover computes f(x,y) and sends psi to V0 or V1 depending on the outcome.<br \/>\nThe nice thing is that this is no harder for the honest prover.<br \/>\nBut cheating provers need more entanglement. They can do it with $latex 2\\times 2^n$ ebits.<br \/>\nGarden hose model:<br \/>\nA discrete way of looking at attacks of this sort.<br \/>\nAlice and Bob share s pipes between them, and Alice has a water tap.<br \/>\nAlice and Bob use hoses, like Enigma plug cords, to connect the ends of the pipes on their own sides in some pattern (depending on x and y respectively).<br \/>\nf(x,y) is determined by whether the water comes out on the left or the right.<br \/>\nThe garden hose complexity of a function is the number of pipes needed to compute it.<br \/>\nIn the attack, every piece of hose corresponds to a teleportation.<br \/>\nThe GH model can be used to prove upper bounds:<br \/>\nEquality: $latex 2n+O(\\log n)$<br \/>\nInner product: $latex 4n$<br \/>\nMajority: $latex n^2$<br \/>\nIf f is in logspace, then GH(f) is poly(n).<br \/>\nBut there do exist functions with exponential GH complexity.<br \/>\nLower bounds can be proven for explicit functions, but the best we can prove are linear.<\/p>\n<h2>Poster Session 1<\/h2>\n<p>The poster session had tons of great posters, to which we will not remotely do justice. Instead, here are some photos from a few of them. Try to guess which ones. 
\ud83d\ude42<br \/>\n<a href=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/P1200922.jpg?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-5790\" title=\"cat in computer\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/P1200922-1024x861.jpg?resize=525%2C441&#038;ssl=1\" alt=\"\" width=\"525\" height=\"441\" \/><\/a><br \/>\n<a href=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/P1200926.jpg?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-5792\" title=\"Zero error with feedback\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/P1200926-768x1024.jpg?resize=525%2C700&#038;ssl=1\" alt=\"\" width=\"525\" height=\"700\" \/><\/a><br \/>\n<a href=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/P1200924.jpg?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-5791\" title=\"Rank-1 quantum chromatic number\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2011\/12\/P1200924-1024x742.jpg?resize=525%2C380&#038;ssl=1\" alt=\"\" width=\"525\" height=\"380\" \/><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Sergey Bravyi, Topological qubits: stability against thermal noise. Joint work with Jeongwan Haah, based on arXiv:1105.4159. Sergey opened his talk with an image of a bagel in a Solfatara landscape, meant to evoke topogically-protected qubits (get it? a bagel?) in a thermal (steam) bath. 
The question is whether we can engineer an energy landscape with &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/dabacon.org\/pontiff\/2011\/12\/13\/qip-2012-day-1\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;QIP 2012 Day 1&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[22,40,63],"tags":[],"class_list":["post-5728","post","type-post","status-publish","format-standard","hentry","category-conferences","category-liveblogging","category-quantum"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/5728","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/comments?post=5728"}],"version-history":[{"count":0,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/5728\/revisions"}],"wp:attachment":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/media?parent=5728"}],"wp:term":[{"taxonomy":"category","embe
ddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/categories?post=5728"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/tags?post=5728"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}