{"id":6496,"date":"2012-09-19T18:10:57","date_gmt":"2012-09-20T01:10:57","guid":{"rendered":"http:\/\/dabacon.org\/pontiff\/?p=6496"},"modified":"2012-09-19T18:10:57","modified_gmt":"2012-09-20T01:10:57","slug":"lucky-13-paper-dance","status":"publish","type":"post","link":"https:\/\/dabacon.org\/pontiff\/2012\/09\/19\/lucky-13-paper-dance\/","title":{"rendered":"Lucky 13 paper dance!"},"content":{"rendered":"<p>Having recently rediscovered arxiv.org\/submit, I thought I&#8217;d mention a few papers to come out of team UW.<br \/>\nTwo particularly exciting single-author papers are from students here.<br \/>\nKamil Michnicki has developed a 3-d stabilizer code on $latex n$ qubits with an energy barrier that scales as $latex n^{2\/9}$. By contrast, such a result is <a href=\"http:\/\/arxiv.org\/abs\/0810.1983\">impossible in 2-d<\/a>, and the best energy barrier previously obtained in 3-d was $latex O(log n)$ from Haah&#8217;s breakthrough <a href=\"http:\/\/arxiv.org\/abs\/1101.1962\">cubic code<\/a>. Sadly, this code appears not to have the thermal stability properties of <a href=\"http:\/\/arxiv.org\/abs\/0811.0033\">the 4-d toric code<\/a>, but it nevertheless is an exciting step towards a self-correcting quantum memory.<\/p>\n<blockquote><p><strong>1208.3496:<\/strong> <a href=\"http:\/\/arxiv.org\/abs\/1208.3496\">3-d quantum stabilizer codes with a power law energy barrier<\/a><br \/>\nKamil Michnicki<\/p><\/blockquote>\n<p><a href=\"http:\/\/homes.cs.washington.edu\/~djr\/\">David Rosenbaum<\/a> has written a mostly classical algorithms paper about the old problem of group isomorphism: given two groups $latex G,H$ specified by their multiplication tables, determine whether they are isomorphic. The problem reduces to graph isomorphism, but may be strictly easier. Since any group with $latex |G|=n$ has a generating set of size $latex leq log_2(n)$, it follows that the problem can be solved in time $latex n^{log(n)+O(1)}$. 
While faster algorithms have been given in many special cases, the trivial upper bound of $latex n^{\\log(n)}$ has resisted attack for decades. See <a href=\"https:\/\/rjlipton.wordpress.com\/2011\/11\/07\/the-group-isomorphism-problem-a-possible-polymath-problem\/\">G\u00f6del&#8217;s Lost Letter<\/a> for some discussion. The particular classes of groups considered to be hardest have been the nilpotent (or more generally solvable) groups, since paradoxically the rigidity of highly non-abelian groups (e.g. simple groups) makes them easier to address. David found a polynomial speedup for solvable groups, thus making the first progress on this problem since the initial $latex n^{\\log(n)}$ algorithms.<\/p>\n<blockquote><p><strong>1205.0642<\/strong>: <a href=\"http:\/\/arxiv.org\/abs\/1205.0642\">Breaking the $latex n^{\\log n}$ Barrier for Solvable-Group Isomorphism<\/a><br \/>\nDavid Rosenbaum<\/p><\/blockquote>\n<p>Lukas Svec (together with collaborators) also has a nice way of improving the Gottesman-Knill simulations that have been <a href=\"http:\/\/dspace.mit.edu\/handle\/1721.1\/44407\">so effective<\/a> in estimating FTQC thresholds. Gottesman-Knill allows mixtures of Clifford unitaries to be simulated classically, which seems as though it should only be effective for simulating unital noise. However, throwing away a qubit and replacing it with the $latex |0\\rangle$ state can also be encompassed within the Gottesman-Knill approach. This insight allows them to give much better simulations of amplitude-damping noise than any previous approach.<\/p>\n<blockquote><p><strong>1207.0046:<\/strong> <a href=\"http:\/\/arxiv.org\/abs\/1207.0046\">Approximation of real error channels by Clifford channels and Pauli measurements<\/a><br \/>\nMauricio Guti\u00e9rrez, Lukas Svec, Alexander Vargo, Kenneth R. Brown<\/p><\/blockquote>\n<p>There have also been two papers about the possibilities of two dimensions. 
David has a paper explaining how a general circuit with arbitrary two-qubit interactions on $latex n$ qubits can be simulated in a 2-d architecture by using $latex n^2$ qubits. If only $latex k$ gates happen per time step, then $latex nk$ qubits suffice. The key trick is to use classical control to perform teleportation chains, an idea whose provenance I&#8217;m unclear on, but which is based in part on <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0010033\">MBQC<\/a> and in part on <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0205133\">a remarkable paper of Terhal and DiVincenzo<\/a>.<\/p>\n<blockquote><p><strong>1205.0036:<\/strong> <a href=\"http:\/\/arxiv.org\/abs\/1205.0036\">Optimal Quantum Circuits for Nearest-Neighbor Architectures<\/a><br \/>\nDavid Rosenbaum<\/p><\/blockquote>\n<p>Examining particular algorithms can enable more dramatic speedups, and by examining Shor&#8217;s algorithm, Paul Pham was able to reduce the depth to polylogarithmic, surprisingly finding an improved implementation of the most well-studied of quantum algorithms.<\/p>\n<blockquote><p><strong>1207.6655:<\/strong> <a href=\"http:\/\/arxiv.org\/abs\/1207.6655\">A 2D Nearest-Neighbor Quantum Architecture for Factoring<\/a><br \/>\nPaul Pham, Krysta M. Svore<\/p><\/blockquote>\n<p>David and I also have a joint paper on an alternate oracle model in which one input to the oracle is supplied by the user and a second input is random noise. While in some cases (e.g. <a href=\"http:\/\/arxiv.org\/abs\/1202.1027\">a Grover oracle that misfires<\/a>) this does not lead to quantum advantages, we find that in other cases, quantum computers can solve problems in a single query that classical computers cannot solve with <em>unlimited<\/em> queries. 
Along the way, we address the question of when some number of (quantum or classical) queries yield no useful information at all about the answer to an oracle problem.<\/p>\n<blockquote><p><strong>1111.1462:<\/strong> <a href=\"http:\/\/arxiv.org\/abs\/1111.1462\">Uselessness for an Oracle Model with Internal Randomness<\/a><br \/>\nDavid Rosenbaum, Aram W. Harrow<\/p><\/blockquote>\n<p>My co-blogger Steve has also been active. Steve and his co-authors (does that make them my step-co-authors?) have written perhaps the definitive work on how to estimate approximately low-rank density matrices using a small number of measurements.<\/p>\n<blockquote><p><strong>1205.2300<\/strong>: <a href=\"http:\/\/arxiv.org\/abs\/1205.2300\">Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators<\/a><br \/>\nSteven T. Flammia, David Gross, Yi-Kai Liu, Jens Eisert<\/p><\/blockquote>\n<p>Steve, together with Ghost Pontiff Dave and Dave&#8217;s former student Gregory Crosswhite, have also posted<\/p>\n<blockquote><p><strong>1207.2769<\/strong>: <a href=\"http:\/\/arxiv.org\/abs\/1207.2769\">Adiabatic Quantum Transistors<\/a><br \/>\nDave Bacon, Steven T. Flammia, Gregory M. Crosswhite<\/p><\/blockquote>\n<p>This paper proposes a deeply innovative approach to quantum computing, in which one adiabatically transforms one simple spatially-local Hamiltonian to another. 
Unlike previous approaches, it seems to have a chance of having some compelling fault-tolerance properties, although analyzing this remains challenging.<br \/>\nSteve and I also have a brief note (<a href=\"http:\/\/arxiv.org\/abs\/1204.3404\">arXiv:1204.3404<\/a>) relevant to my ongoing debate with Gil Kalai (see <a href=\"https:\/\/rjlipton.wordpress.com\/2012\/09\/16\/quantum-repetition\/\">here<\/a>, <a href=\"https:\/\/rjlipton.wordpress.com\/2012\/06\/20\/can-you-hear-the-shape-of-a-quantum-computer\/\">here<\/a>, <a href=\"https:\/\/rjlipton.wordpress.com\/2012\/05\/12\/quantum-refutations-and-reproofs\/\">here<\/a>, <a href=\"https:\/\/rjlipton.wordpress.com\/2012\/03\/05\/the-quantum-super-pac\/\">here<\/a>, <a href=\"https:\/\/rjlipton.wordpress.com\/2012\/02\/15\/nature-does-not-conspire\/\">here<\/a>, <a href=\"https:\/\/rjlipton.wordpress.com\/2012\/02\/02\/quantum-groundhog-day\/\">here<\/a> or <a href=\"https:\/\/dabacon.org\/pontiff\/?p=6297\">here<\/a>) in which we point out counterexamples to one of Gil&#8217;s conjectures. <a href=\"https:\/\/rjlipton.wordpress.com\/2012\/05\/12\/quantum-refutations-and-reproofs\/\">This post<\/a> specifically contains more discussion of the issue.<br \/>\nFinally, I&#8217;ve been clearing out a lot of my unpublished backlog this year.<br \/>\nMy co-authors and I wrote a short paper explaining the main ideas behind the superactivation of zero-error capacity. The principle is similar to that found in all of the additivity violations based on random quantum channels: we choose a correlated distribution over channels $latex {\\cal N}_1, {\\cal N}_2$ in a way that forces $latex {\\cal N}_1 \\otimes {\\cal N}_2$ to have some desired behavior (e.g. when acting on a particular maximally entangled state). At the same time, apart from this constraint, the distribution is as random as possible. 
Hopefully we can then show that any single-copy use of $latex {\\cal N}_1$ or $latex {\\cal N}_2$ has low capacity, or in our case, <em>zero<\/em> zero-error capacity. In our case, there are a few twists, since we are talking about zero-error capacity, which is a fragile property more suited to algebraic geometry than the usual approximate techniques in information theory. On the other hand, this means that at many points we can show that properties hold with probability 1. The other nontrivial twist is that we have to show that not only $latex {\\cal N}_i$ has zero zero-error capacity (yeah, I know it&#8217;s a ridiculous expression) but $latex {\\cal N}_i^{\\otimes n}$ does for all $latex n$. This can be done with some more algebraic geometry (which is a fancy way of saying that the set of simultaneous zeroes of a set of polynomials has measure equal to either 0 or 1) as well as the fact that the property of being an <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/9808030\">unextendible product basis<\/a> is stable under tensor product.<\/p>\n<blockquote><p><strong>1109.0540:<\/strong> <a href=\"http:\/\/arxiv.org\/abs\/1109.0540\">Entanglement can completely defeat quantum noise<\/a><br \/>\nJianxin Chen, Toby S. Cubitt, Aram W. Harrow, Graeme Smith<\/p><\/blockquote>\n<p>One paper that was a fun bridge-building exercise (with a nice <a href=\"http:\/\/blog.computationalcomplexity.org\/2012\/09\/two-quantum-annoucements-and-one.html\">shout out from Umesh Vazirani\/BILL GASARCH<\/a>) was a project with quantum information superstar Fernando Brand&atilde;o as well as a pack of classical CS theorists. My part of the paper involved connections between QMA(2) and optimizing polynomials over $latex \\mathbb{R}^n$. For example, if $latex a_1,\\ldots, a_m$ are the rows of a matrix $latex A$, then define $latex \\|A\\|_{2\\rightarrow 4} = \\max_{\\|x\\|_2=1} \\|Ax\\|_4 = \\max_{\\|x\\|_2=1} (\\sum_{i=1}^m |\\langle a_i, x\\rangle|^4)^{1\/4}$. 
Taking the fourth power we obtain the maximum energy attainable by product states under the Hamiltonian $latex H = \\sum_{i=1}^m a_i a_i^* \\otimes a_i a_i^*$. Thus, hardness results and algorithms can be ported in both directions. One natural algorithm is called &#8220;the Lasserre SDP hierarchy&#8221; classically and &#8220;optimizing over $latex k$-extendable states&#8221; quantumly, but in fact these are essentially the same thing (an observation dating back to <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0308032\">a 2003 paper of Doherty, Parrilo, and Spedalieri<\/a>). There is much more to the paper, but I&#8217;ll leave it at that for now.<\/p>\n<blockquote><p><strong>1205.4484:<\/strong> <a href=\"http:\/\/arxiv.org\/abs\/1205.4484\">Hypercontractivity, Sum-of-Squares Proofs, and their Applications<\/a><br \/>\nBoaz Barak, Fernando G.S.L. Brand&atilde;o, Aram W. Harrow, Jonathan A. Kelner, David Steurer, Yuan Zhou<\/p><\/blockquote>\n<p>Another big collaboration taking me out of my usual areas was this paper on quantum architecture. Suppose that our quantum computer is composed of many small nodes (say ion traps), connected by long-range links (say optical fibers), as has been <a href=\"http:\/\/arxiv.org\/abs\/1208.0391\">recently advocated<\/a>. This computer would not be stuck with a 2-d topology, but could be connected in any reasonably low-degree configuration. Our paper shows that a hypercube topology (among many other possibilities) is enough to simulate general quantum circuits. This enables parallelized versions of Grover search that finally (in my opinion) address <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0309123\">the problem raised by Grover and Rudolph<\/a> about the memory requirements for the &#8220;best&#8221; known collision and element-distinctness algorithms. 
As a result, we find space-time tradeoffs (assuming this hypercube topology) for collision and element distinctness of $latex ST=\\tilde O(\\sqrt N)$ and $latex ST=\\tilde O(N)$ respectively.<\/p>\n<blockquote><p><strong>1207.2307:<\/strong> <a href=\"http:\/\/arxiv.org\/abs\/1207.2307\">Efficient Distributed Quantum Computing<\/a><br \/>\nRobert Beals, Stephen Brierley, Oliver Gray, Aram W. Harrow, Samuel Kutin, Noah Linden, Dan Shepherd, Mark Stather<\/p><\/blockquote>\n<p>The next paper was on more familiar territory. Together with my former PhD student Richard Low (and building on the work of <a href=\"http:\/\/arxiv.org\/abs\/quant-ph\/0701125\">Dahlsten, Oliveira and Plenio<\/a>), I proved that random quantum circuits are approximate unitary 2-designs in <a href=\"http:\/\/arxiv.org\/abs\/0802.1919\">0802.1919<\/a>. Later Fernando Brand&atilde;o and Michal Horodecki improved this to show that random quantum circuits are approximate unitary 3-designs in <a href=\"http:\/\/arxiv.org\/abs\/1010.3654\">1010.3654<\/a> (achieving a sweet oracle speedup in the process). Teaming up with them I expected maybe to reach 5-designs, but in the end we were able to get arbitrary $latex k$-designs on $latex n$ qubits with circuits of length $latex {\\rm poly}(n,k)$.<\/p>\n<blockquote><p><strong>1208.0692<\/strong> <a href=\"http:\/\/arxiv.org\/abs\/1208.0692\">Local random quantum circuits are approximate polynomial-designs<\/a><br \/>\nFernando G. S. L. Brand&atilde;o, Aram W. Harrow, Michal Horodecki<\/p><\/blockquote>\n<p>Finally, one more paper came out of my classical CS dabbling. Together with classical (but quantum curious) theorists Alexandra Kolla and <a href=\"http:\/\/users.cms.caltech.edu\/~schulman\/\">Leonard Schulman<\/a>, we found a cute combinatorial result in our failed bid to refute the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Unique_games_conjecture\">unique games conjecture<\/a> on the hypercube. 
Our result concerns what are called <a href=\"https:\/\/en.wikipedia.org\/wiki\/Maximal_function\">maximal functions<\/a>. Hardy and Littlewood introduced these <a href=\"http:\/\/link.springer.com\/article\/10.1007%2FBF02547518\">by talking about cricket<\/a>; here is a contemporary version. Imagine that you are a Red Sox fan whose happiness at any given time depends on what fraction of the last $latex n$ games the Red Sox have won against the Yankees. Fortunately, you are willing to choose $latex n$ differently from day to day in order to maximize your happiness. For example, if the Red Sox won the last game, you take $latex n=1$ and your happiness is 100%. If they won 3 out of the last 5 games, you could take $latex n=5$ and obtain happiness 60%. The maximal operator takes the win-loss series and transforms it into the happiness function. (A similar principle is at work when the loser of rock-paper-scissors proposes best 3-out-of-5, and then best 4-out-of-7, etc.) In our paper, we bound how much the maximal operator can increase the 2-norm of a function when it acts on functions on the hypercube, and the maximization is taken not over intervals, but over Hamming spheres in the hypercube. Along the way, we prove some bounds on <a href=\"http:\/\/mathworld.wolfram.com\/KrawtchoukPolynomial.html\">Krawtchouk polynomials<\/a> that seem so simple and useful I feel we <em>must<\/em> have overlooked some existing paper that already proves them.<\/p>\n<blockquote><p>\n<b>1209.4148:<\/b> <a href=\"http:\/\/arxiv.org\/abs\/1209.4148\">Dimension-free L2 maximal inequality for spherical means in the hypercube<\/a><br \/>\nAram W. Harrow, Alexandra Kolla, Leonard J. Schulman\n<\/p><\/blockquote>\n","protected":false},"excerpt":{"rendered":"<p>Having recently rediscovered arxiv.org\/submit, I thought I&#8217;d mention a few papers to come out of team UW. Two particularly exciting single-author papers are from students here. 
Kamil Michnicki has developed a 3-d stabilizer code on $latex n$ qubits with an energy barrier that scales as $latex n^{2\/9}$. By contrast, such a result is impossible in &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/dabacon.org\/pontiff\/2012\/09\/19\/lucky-13-paper-dance\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Lucky 13 paper dance!&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[65,76],"tags":[],"class_list":["post-6496","post","type-post","status-publish","format-standard","hentry","category-quantum-computing","category-self-meet-center-center-meet-self"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/6496","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/comments?post=6496"}],"version-history":[{"count":0,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/6496\/revisions"}],"wp:attach
ment":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/media?parent=6496"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/categories?post=6496"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/tags?post=6496"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}