QIP 2012 Day 2

Quantum Information Enters the Third Dimension

Itai Arad, Zeph Landau and Umesh Vazirani:
An improved area law for 1D frustration-free systems.

Some motivation: which quantum many-body systems can be simulated efficiently on a classical computer? Which states even have an efficient classical description (even if we might have trouble finding that description)? Can we efficiently describe their entanglement? These questions are important both theoretically and practically.

In this talk, we’ll address these questions, focusing on the case of ground states of gapped local Hamiltonians.
We care about this special case for reasons related to both physics and computer science. Local Hamiltonians are ubiquitous in condensed matter physics, and are the natural quantum version of constraint satisfaction problems in complexity theory.
For a state to be “efficient” in a meaningful sense, we should not just have a poly-sized classical description of the state. (For example, the local Hamiltonian itself implicitly gives such a description!) We should be able to efficiently calculate the expectation values of local observables.
For an arbitrary pure state, the Schmidt rank ($latex \mathrm{SR}$) or the von Neumann entropy ($latex S$) can be as large as the total number of spins in one half of a bipartition. In condensed matter systems we expect that these measures of entanglement don’t have a volume scaling, but rather a scaling proportional to the number of spins on the boundary of the region defining the bipartition. This is what is meant by an area law.
Why might you expect that an area law holds? One motivation is the exponential clustering theorem (Hastings & Koma `06, Nachtergaele & Sims `06), which says that gapped systems have connected correlation functions which decay exponentially. Namely, if $latex |\Omega\rangle$ is a ground state of a gapped system, then for local observables with disjoint support we have

$latex | \langle AB \rangle - \langle A \rangle \langle B \rangle | \le c(A,B)\, \exp\bigl(-\mathrm{dist}(A,B)\, O(\Delta E)\bigr)$

Here the constants depend on the strength of interactions and the norms of the local observables.
But correlation is not the same as entanglement! For example, there are data hiding states (Hayden et al. `04, Hastings `07). Bounding the entanglement length is much harder than simply bounding correlation because, in a sense, entanglement can “sum up” a bunch of weak correlations and possibly add up to something large.
The area law has only been proven rigorously for 1d systems (Hastings `07) and it is open in other settings. For a 1D system, the area law means that the entropy of any contiguous block is bounded from above by a constant. Some implications: the ground state has an efficient description in terms of an MPS and it explains why DMRG works so well in practice.
What about dimensions higher than one? The constants in Hastings’ proof are exponential in $latex X = \log(d)/\Delta E$. (Here $latex d$ is the local site dimension.) The best lower bounds (Hastings & Gottesman `09, Irani `09) are only polynomial: $latex S \ge \Delta E^{-1/4}$. Improving the constants in the 1D proof might give nontrivial bounds in 2D by coarse graining one of the dimensions, so progress on these constants could have highly nontrivial implications.
The new result is an upper bound with only polynomial dependence on $latex X$: up to polylog factors, $latex S \le X^3$.
The proof techniques use Chebyshev polynomials, and they might be useful elsewhere. The authors assume the Hamiltonian is frustration free, but this can probably be extended to the more general case of frustrated gapped local Hamiltonians.
Outline of the proof
The general strategy: start with a product state and approach the ground state without generating too much entanglement. For example, let’s assume that the projection $latex \Pi$ onto the ground state doesn’t create too much entanglement, as quantified by the Schmidt rank: $latex \mathrm{SR}(\Pi |\phi\rangle) \le D\, \mathrm{SR}(|\phi\rangle)$ for some $latex D$. Then we would get a bound of $latex S \le \log D$ on the von Neumann entropy.
We will have to use an approximate ground state projector (AGSP) $latex K$ instead. It needs to have three properties (here $latex |\Omega\rangle$ is the unique ground state):

  1. $latex K |\Omega\rangle = |\Omega\rangle$
  2. $latex \| K |\Omega^\perp\rangle \|^2 \le \Delta\, \| |\Omega^\perp\rangle \|^2$
  3. $latex \mathrm{SR}(K|\phi\rangle) \le D\, \mathrm{SR}(|\phi\rangle)$

Three factors determine the bound on $latex S$: $latex D$, which measures how much entanglement we create; the shrinking factor $latex \Delta$, which determines how fast we approach the ground state with each successive application of $latex K$; and $latex \mu$, the overlap between the initial product state and the ground state.
Lemma: $latex S(\Omega) \le O(1)\, \dfrac{\log \mu}{\log \Delta}\, \log D.$
When $latex D \Delta < \frac12$, there is a product state with big overlap with the ground state (i.e. $latex \mu$ isn’t too small). This is the bootstrap lemma. As a corollary, any AGSP with $latex D \Delta < \frac12$ gives an area law.
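As a toy numerical illustration of how the three quantities combine (mine, not the paper’s; the parameter values are made up):

    import numpy as np

    # Sketch of the counting behind the lemma: after l applications of the
    # AGSP K to a product state with ground-state overlap mu, the error
    # shrinks like Delta^l while the Schmidt rank grows like D^l.  Choosing
    # l large enough that Delta^l ~ mu^2 leaves a state close to the ground
    # state whose entropy is at most log2 of its Schmidt rank, l * log2(D).
    def area_law_bound(mu, Delta, D):
        l = 2 * np.log(1 / mu) / np.log(1 / Delta)  # Delta^l ~ mu^2
        return l * np.log2(D)                       # S <= l * log2(D)

    print(area_law_bound(mu=1e-3, Delta=0.25, D=4))  # ~20 bits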
Now to prove the area law, we need to actually find a good AGSP. Since the Hamiltonian is frustration free, we can take its terms to be local projectors w.l.o.g. Then we can use the detectability lemma to get the AGSP.
The idea of the detectability lemma is to split the projectors into two layers: the products of the projectors for the even and the odd terms in the Hamiltonian, respectively. Let $latex A = \Pi_e \Pi_o$. Then we have

$latex \| A |\Omega^\perp\rangle \|^2 \le (1+\Delta E/2)^{-2/3}\, \| |\Omega^\perp\rangle \|^2.$

Therefore $latex A$ is an AGSP: the Schmidt rank increases by a factor of at most $latex D = d^2$, and the shrinking factor satisfies $latex \Delta = (1+\Delta E/2)^{-2/3}$. But we need a better AGSP to get an improvement to the area law.
Main idea: “dilute” A a bit so that we don’t let the entanglement accumulate too much. Can we dilute A so that the Schmidt rank decreases, but without worsening the shrinking factor by too much?
We define a new operator $latex \mathbb{N} = \sum_i Q_i$, which counts the number of constraint violations in a particular layer. It is a polynomial of degree $latex m$ (the number of terms in the layer). Can we approximate it with a lower-degree polynomial? Yes, and this is where the Chebyshev polynomials enter. Taking a Chebyshev polynomial of degree $latex \sqrt{m}$ gives a good approximation with much lower degree. But now the number of terms is large.
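Here’s a quick numerical check of the generic Chebyshev fact being used (my own sketch, not the paper’s exact construction): a rescaled Chebyshev polynomial of degree $latex O(\sqrt{m})$ equals 1 at zero but is strongly suppressed on all of $latex [1, m]$.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    m = 400                               # number of terms in the layer
    k = 2 * int(np.sqrt(m))               # degree ~ sqrt(m), not m
    T_k = C.Chebyshev.basis(k)
    rescale = lambda x: (2 * x - (m + 1)) / (m - 1)   # maps [1, m] to [-1, 1]
    p = lambda x: T_k(rescale(x)) / T_k(rescale(0))   # normalize so p(0) = 1

    # |T_k| <= 1 on [-1, 1], while T_k grows rapidly outside that interval,
    # so p is small on the "violation" eigenvalues [1, m] yet p(0) = 1.
    xs = np.linspace(1, m, 2000)
    print("p(0) =", p(0.0))
    print("max |p| on [1, m] =", np.abs(p(xs)).max())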
We have to bound the number of terms in the sum. The bound is $latex t = \exp\bigl(O(\log^2(q\sqrt{m}))\,\ell\bigr)$. The overall Schmidt rank is at most $latex D^\ell$.
Now we have to glue everything together. We still have to coarse-grain the system a bit at the start to deal with the case when $latex \Delta D > 1$. (Recall, we need this product to be less than one half in order for the first lemma to work.)
Open questions: efficient representation for ground states of gapped local Hamiltonians in higher dimensions? Area law for higher dimensions? Can the proof be generalized to the frustrated case? Can this idea of “dilution” be used elsewhere?

Spyridon Michalakis and Justyna Pytel:
Stability of Frustration-Free Hamiltonians

I already blogged about this work (twice!) so I’ll just refer to that.

Toby Cubitt, Martin Schwarz, Frank Verstraete, Or Sattath and Itai Arad:
Three Proofs of a Constructive Commuting Quantum Lovasz Local Lemma

The Lovász Local Lemma (LLL) says that if several events are “mostly independent”, then the probability that none of these events occurs is still strictly positive. The constructive version of the LLL, whose most recent instantiation is due to Moser and Moser & Tardos, shows that the simplest algorithm you can imagine converges to a solution: just start with a random assignment and then check random constraints which are violated, fix them with a random assignment, and repeat until you reach a satisfying assignment. The quantum version of the LLL replaces classical “events” with projectors, but the current version is nonconstructive. Can we find a constructive proof in the quantum case?
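For reference, a minimal sketch of the classical Moser-Tardos algorithm just described, written for k-SAT (the clause encoding and the example instance are my own):

    import random

    def moser_tardos(n_vars, clauses):
        """clauses: lists of signed literals, e.g. [1, -3] means x1 or (not x3)."""
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        def violated(cl):
            return not any(assign[abs(l)] == (l > 0) for l in cl)
        while True:
            bad = [cl for cl in clauses if violated(cl)]
            if not bad:
                return assign
            # pick a violated constraint and resample its variables at random
            for l in random.choice(bad):
                assign[abs(l)] = random.random() < 0.5

    # (x1 or x2) and ((not x1) or x3)
    print(moser_tardos(3, [[1, 2], [-1, 3]]))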
There are two challenges: entanglement and the measurement-disturbance tradeoff. This talk will focus on the challenge of entanglement, since it arises even for commuting Hamiltonians.
In the simplest generalization of Moser & Tardos (probabilistic version) to the quantum case, we just replace local qubits with the maximally mixed state. This is the Verstraete et al. `09 dissipative map, so we know it converges to the correct state, but we can’t (yet) bound the convergence rate.
Key lemma: We can bound the probability of generating a particular sampling sequence, i.e. a particular directed acyclic graph (DAG). Next, we can bound the expected number of violations, then use this as a subroutine in an efficient algorithm. This allows us to bound the runtime.
Second approach: use the original entropy compression argument of Moser. This goes through without too much effort. The algorithm halts in expected linear time.
What about the noncommutative case? Now there are two complementary problems: in the first proof, we can no longer bound the runtime. In the info-theoretic second proof, there is no guarantee that it halts on the correct state (though it definitely will halt). There are two conjectures which, if true, would enable a noncommutative constructive QLLL. See the paper for details.
Toby, you went way too fast for me to keep up, sorry.

Norbert Schuch:
Complexity of commuting Hamiltonians on a square lattice of qubits

I also blogged about this one previously.

Josh Cadney, Noah Linden and Andreas Winter (contributed talk):
Infinitely many constrained inequalities for the von Neumann entropy

The speaker addresses a central question which affects all of us: Why is this talk better than lunch? Answer: it’s different; it’s introductory.
The von Neumann entropy satisfies many inequalities, for example, positivity and the strong subadditivity inequality. In this talk, we are interested only in inequalities based on reduced states that are dimension independent.
Define the entropy vector: for any subset of systems $latex \alpha$, we take the reduced state on $latex \alpha$ and the associated entropy, and stack all of these entropies into one big vector. If we fix the number of parties to be $latex n$, then we let $latex \Sigma_n$ be the set of all entropy vectors of $latex n$-partite states. We don’t make restrictions on the local Hilbert space dimension. We want to characterize this set of entropy vectors (as a function of $latex n$). Since the von Neumann entropy tells us about bipartite entanglement, this might tell us something about multipartite entanglement.
If we look at all of $latex \Sigma_n$, then we get a convex cone once we take the closure. Call this the entropy cone. What sort of inequalities do we know which characterize this entropy cone?

  • For one party, we know positivity.
  • For two parties, we know subadditivity and triangle inequality.
  • For three parties, we know strong subadditivity and weak monotonicity.

Do these existing inequalities tell us everything we need to know about the entropy cone? To check this, we look at the extremal rays and try to find an entropy vector there. For n=2, we find that there are no more inequalities: the above list is exhaustive. For three parties, we find four families of rays, and again there are no more inequalities.
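As a concrete illustration (mine, not from the talk), one can generate entropy vectors numerically and check inequalities like strong subadditivity on them:

    import numpy as np
    from itertools import combinations

    def von_neumann(rho):
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    def partial_trace(rho, keep, dims):
        """Trace out every subsystem not in `keep`; dims = local dimensions."""
        n = len(dims)
        t = rho.reshape(dims + dims)
        for i in sorted(set(range(n)) - set(keep), reverse=True):
            t = np.trace(t, axis1=i, axis2=i + t.ndim // 2)
        d = int(np.prod([dims[i] for i in keep]))
        return t.reshape(d, d)

    # Random three-qubit pure state and its full entropy vector.
    dims = [2, 2, 2]
    psi = np.random.randn(8) + 1j * np.random.randn(8)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    S = {sub: von_neumann(partial_trace(rho, sub, dims))
         for r in (1, 2, 3) for sub in combinations(range(3), r)}
    print(S)
    # Strong subadditivity: S(AB) + S(BC) >= S(ABC) + S(B).
    print(S[(0, 1)] + S[(1, 2)] >= S[(0, 1, 2)] + S[(1,)] - 1e-9)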
For four parties, we can continue this program. The extremal rays are given by stabilizer states! Are there any more four-party inequalities? We don’t yet know. To help address the question, we first look at the classical case (Shannon entropy), where we additionally have (non-weak) monotonicity. Playing the same game in the classical setting, we learn from previous work that the four-party cone isn’t polyhedral: there are an infinite number of entropy inequalities.

New result: given a constraint on the conditional mutual informations, $latex I(A:C|B) = I(B:C|A) = 0$, there are new inequalities, and additional inequalities each time we add a new party. See the paper for details. The proof elaborates on a classical proof by Makarychev et al. from `02. Each of the inequalities is independent: every new inequality sharpens the bound on the entropy cone in a different way from the others. They have not been able to violate a single one of the new classical inequalities numerically.

Jeongwan Haah:
Local stabilizer codes in three dimensions without string logical operators

The general problem is to find a noise-free subspace/subsystem of a physical system on which manipulations are still possible. For a many-body system, local indistinguishability (aka topological order) is stable at zero temperature, but might be fragile at nonzero temperature. Here we are focusing on the case of memory only (so, only in the ground state).

First a review of the toric code. Because the logical operators are string-like, the toric code is not stable at nonzero temperature. This talk will try to build a code which doesn’t have this drawback.

The code is defined on a cubic lattice with 2 qubits on each vertex. There are no string-like logical operators in this model. This system has an exotic ground state degeneracy which depends on the system size.

Some general considerations:

  • Simple cubic lattice with m qubits per site
  • t types of cubic interactions with code space dimension > 1
  • No single-site logical operators
  • No obvious string operator

To simplify things:

  • t=2: one for X and one for Z type Paulis
  • Product of all terms in the Hamiltonian should be Id
  • Logical operator on a site should be Id
  • Logical operator on a straight line should be Id

These conditions can be translated to linear constraint equations on the symplectic vector space of the Abelianized Pauli group. There are 64 commutation relations on a single unit cell of a cubic lattice which determine everything and there are 27 independent linear constraints (from the above considerations). There are 37 free variables, and 30 remaining equations. If you brute force solve everything, then you find there are 17 solutions.
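The linear-algebra step looks roughly like the following sketch (the constraint matrix below is a random stand-in of the stated shape, not Haah’s actual constraints):

    import numpy as np

    def gf2_row_reduce(A):
        """Gaussian elimination over GF(2); returns reduced matrix and pivot columns."""
        A = A.copy() % 2
        rows, cols = A.shape
        pivots, r = [], 0
        for c in range(cols):
            pivot = next((i for i in range(r, rows) if A[i, c]), None)
            if pivot is None:
                continue
            A[[r, pivot]] = A[[pivot, r]]
            for i in range(rows):
                if i != r and A[i, c]:
                    A[i] ^= A[r]          # XOR = addition mod 2
            pivots.append(c)
            r += 1
        return A, pivots

    A = np.random.randint(0, 2, size=(27, 37))   # 27 constraints, 37 variables
    R, pivots = gf2_row_reduce(A)
    print("free variables:", 37 - len(pivots))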
The first code has a Z↔X symmetry and looks like this: [figure omitted].
Some questions: Is this model topologically ordered? What do you mean by a “string”?
To answer the first question, use the local topological order condition from Bravyi, Hastings, and Michalakis: ground states should not be distinguishable by observables with local support. One can check that this condition is satisfied. The computation simplifies if you take advantage of the symmetries.
Let’s get a rigorous definition of “string”. This is a key contribution of this work. A string is a finite Pauli operator which creates excitations at at most two locations. Define an “anchor” to be an envelope around the excitations. The “width” is the size of the anchors, the “length” is the distance between the anchors. There are no geometric restrictions. We need to tweak this definition a bit: we need a notion of a “trivial” string segment to rule out some annoying counterexamples.
The no-strings rule: String segments of width w and length L > c w are always trivial (where c is some constant). The above code satisfies the no-strings rule with c=15.
One can prove this using the same techniques developed above to check the local TQO condition. Jeongwan calls this the eraser technique.
To understand what the no-strings rule means, you have to develop some algebraic tools. Physically, it means you can’t drag defects; you can only annihilate them and then recreate them elsewhere. Jeongwan spent some effort describing these tools, but it was way too much for me to write all of it down (well, at least if I wanted to understand any of it). You’ll have to look at his papers for details.
Jeongwan specifically showed how both the local TQO and no-strings rule could be cast in the language of algebras of polynomials to give simple proofs for the case of his codes.

Andrew Landahl, Jonas Anderson and Patrick Rice:
Fault-tolerant quantum computing with color codes

Haah’s nice code makes me want to think more about 3d
But my 3d glasses here (puts on 3d color glasses) can make 2d color codes look 3 dimensional (see top of post.)
Consider color codes. One of the codes (the square-octagon lattice code, aka the 4,8,8 lattice) has a transversal S gate.
The control model: there is a faulty gate basis consisting of X, Z, H, S, $latex S^\dagger$, CNOT, measurements on X and Z, and state preparations on 0, + and $latex \pi/4$. Additional assumptions: parallel ops, refreshable ancillas, fast classical computation, equal-time gates.
Locality assumptions: 2D layout, local quantum processing.
We also consider two error channels: bit flips and double Pauli (DP).
Consider three models of error:
Circuit based noise model:

  • each preparation and one-qubit gate followed by BP(p)
  • Each CNOT gate followed by DP(p)
  • Each measurement preceded by BP(p) and result flipped with prob p

Phenomenological noise model:

  • Same, except each syndrome bit extraction circuit modeled as a measurement that fails with prob p. This ignores noise propagation between data and ancilla.

One other noise model is the code capacity noise model. See paper for details. 🙂
Decoders and thresholds: one can use an MLE decoder or an optimal decoder (which might be slow). The threshold seems to be around 10.9% for the optimal decoder, and not much worse for the MLE decoder. Some numerical results (depending on the noise model) are 10.56%, 3.05%, and $latex 8 \times 10^{-4}$%.

(An aside.) Random-bond Ising model: Consider an Ising model with random signs on the two-body couplings. Every CSS code has an associated random-bond Ising model. The Nishimori conjecture (which is false!) said that you can’t increase the order by increasing the temperature.
Details on syndrome extraction: One can use sequential schedules or interleaved schedules. We need to make sure that the code checks propagate to themselves, and syndrome checks propagate to themselves multiplied by a stabilizer element. They found an interleaving schedule that works.
Decoding: use the code-capacity MLE decoder (works for all CSS codes). They formulate it as an integer program over GF(2), which can be rewritten as an integer program over the reals. (This isn’t efficient; it’s NP-complete in general.)
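In spirit, the optimization is the following (a brute-force toy version of MLE decoding, mine; the talk’s integer-program formulation attacks the same minimization, and the $latex H$ below is just the 3-bit repetition code):

    import itertools
    import numpy as np

    def mle_decode(H, s):
        """Minimum-weight error e with H e = s (mod 2), by brute force."""
        n = H.shape[1]
        for w in range(n + 1):                      # try weights 0, 1, 2, ...
            for supp in itertools.combinations(range(n), w):
                e = np.zeros(n, dtype=int)
                e[list(supp)] = 1
                if np.array_equal(H @ e % 2, s % 2):
                    return e
        return None

    H = np.array([[1, 1, 0],      # parity checks of the 3-bit repetition code
                  [0, 1, 1]])
    print(mle_decode(H, np.array([1, 0])))   # -> [1 0 0], a single flip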
Showed some pictures of the thresholds.
Turns out the interleaved schedule doesn’t have much impact… it’s about the same threshold (within error) as the sequential schedule.
Also showed some rigorous lower bounds which come from a self-avoiding walk.
Architectures and computation: one can imagine a pancake architecture which does gates transversally between layers. How does this change the threshold?
Good point: Injection of magic states into codes hasn’t really been studied in the case where the injection is error-prone. This would be a good topic for future work.

Guillaume Duclos-Cianci, Héctor Bombin and David Poulin:
Equivalence of Topological Codes and Fast Decoding Algorithms

Note that I blogged about this previously.
Two gapped systems are in the same phase if they are connected by a local unitary transformation, meaning there is a constant-depth quantum circuit (CDQC) which transforms one into the other [Wen et al. 2010, Schuch et al. 2010]. What makes two codes equivalent? Under what conditions does a CDQC exist?
Consider two copies of the Kitaev toric code (KTC). Does this create a new code? No… However, you could imagine applying a CDQC to the pair of codes which masks this product code. Thinking in reverse, you could ask which codes have such a character, namely, that they are loosely coupled versions of multiple copies of the KTC.
The main result is (roughly) that every 2D stabilizer code on qubits is equivalent (modulo CDQC) to several copies of the KTC, and moreover the mapping can be constructed explicitly.
Some useful concepts. Topological charges are equivalence classes of excitations which can be fused within a fixed bounded region back into the vacuum configuration. Excitation configurations are collections of defects inside a region. String operators move defects around.
The result applies in the following setting. They consider 2D lattices of qubits with geometrically local, translationally invariant stabilizer generators. To have a topological-like condition, they ask that the distance of the code scales with the linear size of the lattice ($latex d \sim L$).
Gave a sketch of the proof… too detailed to blog in real time. See the paper for details.
Application to quantum error correction (QEC): The decoding problem is the same as trying to infer the worldline homology class from the measured particle configuration. You can decode, say, the 4,8,8 color code by deducing a mapping of the charges, then deducing an effective noise model on the KTCs, and then applying your favorite decoding algorithm for each KTC.
They did some numerics and found 8.7% fault-tolerance threshold for the 4,8,8 code using the RG decoder.

Joseph M. Renes, Frederic Dupuis and Renato Renner (contributed talk):
Quantum Polar Coding

Joe started off by disappointing the audience with an announcement that his all-ETHZ team had written a paper without any smooth entropies.
Instead, he talked about quantum polar codes, which are explicit, channel-adapted block codes (CSS codes, in fact) with linear-time encoding and decoding, and which achieve the symmetric (see below) coherent information for Pauli channels. They appear to need entanglement assistance, but maybe not? Details to follow.
The key idea is channel polarization. If we have two independent bits $latex X_1, X_2$ going into two bit channels with outputs $latex Y_1, Y_2$, then we can do a change of basis on the input to $latex U_1 = X_1 \oplus X_2, U_2 = X_2$. We think of this as a channel from $latex U_1$ to $latex Y_1Y_2$, together with a channel from $latex U_2$ to $latex Y_1Y_2$ that uses our imperfect knowledge of $latex U_1$ as side information. The first channel has less mutual information than our original channel, and the second channel has more. Hence we have “polarized” the channels, essentially taking the mutual information from one use and giving it to a different one.
The basic relation is:
$latex I(U_1 ; Y_1 Y_2) + I(U_2 ; U_1Y_1Y_2) = 2 I(W)$
where I(W) refers to the mutual information of the original channel.
If we keep doing this recursively, we can continue until we get a bunch of nearly-perfect (good) channels and a bunch of nearly-worthless (bad) channels. Remarkably, the fraction of good channels is close to the input capacity given uniform inputs (which we call the “symmetric capacity”). To encode, we send messages over the good channels, and freeze inputs to bad channels (both parties know which ones are which). This requires only $latex n \log n$ CNOT gates. You can decode sequentially using ML; the recursive structure of the code makes ML efficient.
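The polarization recursion is easy to see numerically for the classical binary erasure channel, the standard warm-up example (this sketch is mine; the talk does the analogous bookkeeping for amplitude and phase of Pauli channels):

    # One polarization step turns two erasure channels with erasure rate e
    # into a worse channel (rate 2e - e^2) and a better one (rate e^2).
    def polarize(eps, levels):
        channels = [eps]
        for _ in range(levels):
            channels = [x for e in channels for x in (2*e - e*e, e*e)]
        return channels

    chans = polarize(0.3, 12)                 # 4096 synthesized channels
    good = sum(1 for e in chans if e < 1e-3)
    # The good fraction approaches the capacity 1 - eps = 0.7:
    print(good / len(chans))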
The quantum version just uses the fact that the transformation is good in the conjugate basis as well! You just turn the circuit “upside-down” (CNOTs reverse the target and control). For a Pauli channel, the polarization occurs in both amplitude AND phase, albeit in different directions. We only need to analyze amplitude and phase in order to verify that a channel can send general quantum states (see Christandl & Winter 2005 for the precise statement).
Some channels are going to be good for both amplitude and phase, so we use those to send entanglement. For the ones which are bad on amplitude, we freeze the amplitude, and similarly, for the ones which are bad on phase, we freeze the phase. We can use preshared entanglement for the channels which are bad on both amplitude and phase.
To decode, just successively and coherently decode both the channels. Reuse the classical decoders. Decode first amplitude, then phase conditional on the given amplitude. You can achieve the (symmetric) coherent information as the net rate.
From the numerical simulations, (see picture) it seems that this entanglement assistance is never actually needed!
In fact, for channels with low enough error rates they can show this rigorously. The proof techniques are similar to the original polar code proof techniques and use channel martingales.
One open question is to resolve whether or not the entanglement assistance is really necessary. Also, can they extend this to other channels? Yes! It works (joint work in progress with M. Wilde.) Plus, you can still do sequential decoding.

Sergey Bravyi and Robert Koenig:
Disorder-assisted error correction in Majorana chains

Consider, if you haven’t already, the toric code Hamiltonian $latex H_0$ given by minus the sum of the toric code stabilizer generators. Now perturb it lightly by adding local Z fields: $latex V = -m \sum_i Z_i$. What does the storage fidelity look like (at zero temperature) as a function of time? That is, what happens if we encode an adversarially chosen pair of qubits, wait a time $latex t$, and then try to recover the qubits? Let $latex T_{\rm storage}$ denote the longest time we can wait and still get 99% fidelity.
The bad news: $latex T_{\rm storage} = O(\log N)$.
Crazy idea: Add randomness!
The idea is that the perturbation creates particles, which then hop around, creating strings that quickly lead to logical errors. But adding a little disorder can slow them down via the phenomenon of Anderson localization! In fact, the eigenfunctions can be exponentially localized. This is a nice example of a phenomenon in which something that sounds like a negative result—disorder making quantum particles move even more slowly than diffusion—turns into a positive result, when we can use it to limit our old enemy, Eve.
Challenges: need it to work for many particles, and in 2-D. This is intractable to simulate classically.
So… instead we consider a 1D model that we can analyze, viz. unpaired Majorana modes in quantum wires (due, of course, to Kitaev).
The Hamiltonian is a sum over three types of terms: a chemical potential ($latex a_j^\dagger a_j$), a hopping term ($latex a_j^\dagger a_{j+1} + h.c.$), and a Cooper pair term ($latex \Delta^* a_i^\dagger a_j^\dagger + \Delta a_i a_j$). The base Hamiltonian has a 2-fold degenerate ground state (thus protecting a qubit). With respect to Majorana operators $latex c_j$, the Hamiltonian becomes the super simple:
$latex \frac{J}{2} \sum_{j=1}^{N-1} i c_{2j} c_{2j+1}$
The perturbation is $latex V = \sum_{j=1}^{N-1} i \mu c_{2j-1} c_{2j}$.
The resulting error-correction procedure resembles the minimum-weight matching procedure for the toric code, but is much simpler because it works on a line.
This code resembles the lowly repetition code, but actually has nontrivial distance.
Now what happens if the perturbation has randomly-varying weights? That is, $latex V = \sum_{j=1}^{N-1} i \mu_j c_{2j-1} c_{2j}$ for random $latex \mu_j$.
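Here’s a hedged single-particle sketch of why disorder helps (my own toy, with made-up parameters and illustrative index conventions): the quadratic Majorana chain is a hopping problem for the 2N Majorana modes, and random couplings localize its eigenmodes.

    import numpy as np

    # Build the real antisymmetric coupling matrix A of the quadratic
    # Majorana Hamiltonian (conventions illustrative), diagonalize the
    # Hermitian matrix iA, and measure localization via the inverse
    # participation ratio (IPR): larger IPR means more localized modes.
    def mean_ipr(N, J, mus):
        A = np.zeros((2 * N, 2 * N))
        for j in range(N - 1):              # bulk terms  i J c_{2j} c_{2j+1}
            A[2*j + 1, 2*j + 2] = J
        for j in range(N):                  # on-site terms  i mu_j c_{2j-1} c_{2j}
            A[2*j, 2*j + 1] = mus[j]
        A = A - A.T                         # antisymmetrize
        w, v = np.linalg.eigh(1j * A)       # iA is Hermitian
        return np.sum(np.abs(v) ** 4, axis=0).mean()

    N = 200
    print("uniform couplings:", mean_ipr(N, 1.0, 0.5 * np.ones(N)))      # small IPR
    print("random couplings: ", mean_ipr(N, 1.0, 0.5 * np.random.rand(N)))  # larger IPR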
The expected storage time results can be summarized as follows (storage time without disorder vs. with disorder):

  • Weak or no perturbations, small N: $latex \exp(\Omega(N))$ vs. $latex \exp(\Omega(N))$. Not so nice…
  • Strong perturbations, small N: $latex O(\log N)$ vs. $latex \Omega(N)$. Nice…
  • Large N: $latex O(\log N)$ vs. $latex \exp(\Omega(N))$. Very nice!

The results are a mix of rigorous proofs and numerical simulation.
Dynamical localization resembles that obtained for the 1-particle Anderson model.
$latex c_q(t) = \sum_p R_{p,q}(t) c_p$ with $latex |R_{p,q}(t)| \leq e^{-|p-q|/t}$.
That’s only for one particle, though. They actually need to prove a bound on the expected determinant of submatrices of $latex R$. Using this, they can show the fidelity remains high for an exponentially long time. This is interesting not only because of its implications for quantum memory, but because understanding multi-particle localization is a tough problem in general.
The proof uses quasi-adiabatic continuation for free fermions to argue that a superposition in the ground space is mostly preserved. Or more precisely, that the only damage occurring to it consists of correctable errors. The techniques look important and interesting. Properties such as gap stability that they prove might be of interest elsewhere.
But they don’t solve everything: in some cases, they need numerical simulation. They use a number of tricks to tame the exponentials that would come from naive algorithms; in particular, taking advantage of the fact that they are working with fermionic Gaussian states (building on, and improving, an old result of Terhal and DiVincenzo).

Open Science and the Quantumista Condensate

A rare event occurred today here in Seattle. And I’m not just talking about the 20 minutes of partly sunny skies we got at lunchtime. No, this was something rarer still: a quantumista condensate.
What precipitated this joint gathering of the decohered and the coherent? Michael Nielsen was visiting the University of Washington to deliver a distinguished lecture on the topic of his new book: open science. Having seen Michael talk about this subject 3 years ago at QIP Santa Fe, I can say that he has significantly focused his ideas and his message. He makes a very compelling case for open science and in particular open data. He has thought very hard about what makes online collaborative science projects successful at focusing and amplifying our collective intelligence, why such projects sometimes fail, and which steps we need to take to get to the promised land from where we are currently.
The talk was recorded, and as soon as the video becomes available I’ll put a link here. I highly recommend watching it.
Update (12/12): Here is the link to Michael’s talk.
You might be wondering, what is the optimizer doing there? He is in town to give the colloquium to the computer science department. And given all the excitement, Dave Bacon, aka Pontiff++, couldn’t help but sneak over from Google to check things out. He is the one you can blame for coining the horrible phrase “quantumista condensate”, but you probably already guessed that.

What are the odds?

Let’s multiply together a bunch of numbers which are less than one and see how small they get!
If that sounds like fun, then you’ll love this sleek new infographic (of which the above is just the teaser) posted this morning at BoingBoing. The graphic is based on this blog post by Dr. Ali Binazir, who apparently has an AB (same as a BA) from Harvard, an MD from the UC San Diego School of Medicine, and an M.Phil. from Cambridge.
I’ll save you the effort of clicking through: the good doctor estimates the probability of “your existing as you, today”. His estimate consists of (what else?) multiplying a bunch of raw probability estimates together without conditioning! And I’ll give you a hint as to the conclusion: the odds that you exist are basically zero! Astounding.
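In that spirit, the methodology, faithfully reproduced (with numbers every bit as made up as the original’s):

    import numpy as np

    # Multiply a bunch of unconditioned "probabilities" together and marvel.
    odds_you_exist = np.prod(np.random.rand(100))
    print(odds_you_exist)   # ~1e-44 or so; apparently none of us exist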
I should add that, at the end of the post, he was apparently forced to add a disclaimer that “It’s all an exercise to get you thinking…” and to (obliquely) admit that the calculation is bogus.
Is there any branch of mathematics which is abused so extravagantly as probability? I think these sorts of abuses are beyond even the most egregious statistical claims, no?

Quantum Information and Foundations at the APS March Meeting 2012

After John Preskill’s call for more quantum participation at the APS March Meeting, I couldn’t say no to a request for a blurb about quantum info and foundations! The following is a guest post by Giulio Chiribella.
Following up on the previous post by John Preskill, I’d like to draw your attention to the focus session “Quantum Information for Quantum Foundations”, which will take place at the APS March Meeting 2012.
If you are interested in the conceptual and fundamental aspects of Quantum Information, this is the right session for you to come and present your work! The event promises to be lively and stimulating, and will be a great occasion to advertise your recent results.
On top of that, your participation will give important support to foundational research. The foundational space at the March Meeting is a great opportunity for our community, but it is vital to keep this space alive by responding with visible participation and by presenting talks on the best foundational research in Quantum Information. This is not hard to do: over the past few years there has been an enormous amount of progress and a burst of exciting new results at the interface between Quantum Information and Foundations. Also, we should not forget the numerous results in Quantum Information that, even without being explicitly foundational, continue to shed a bright light on the operational features of Quantum Theory. It is enough to have such a vibrant scientific landscape represented next March in Boston to make the foundational session memorable!
This year’s session will start with an invited talk by Valerio Scarani, who will summarize the main ideas and the latest developments on Information Causality. Valerio’s talk will be followed by a lineup of contributed talks, which will hopefully be as numerous and as lively as those at last year’s edition, organized by Chris Fuchs, which was a very successful event.
To participate in the session, you can submit your abstract at the webpage http://www.aps.org/meetings/abstract/index.cfm. (Don’t forget that the deadline for submissions is this Friday, November 11th, 2011!)
A last remark before concluding: chatting with colleagues, I have noticed that potential speakers are sometimes discouraged by the 12-minute format, which seems too short to present all the relevant details. We should remember, however, that presenting details is not really the point here: the APS Meetings are huge events where the whole physics community meets to highlight advances and to advertise new ideas across fields, not to address the specialists of one particular field. The format of the March Meeting talks is designed to rapidly advertise new results, and if you discover that you would like to know more about one particular result… well, during the meeting there is a lot of free time where you can interact directly (and more efficiently) with the speaker about the details of her/his work.
So, let us take the event in the right spirit and make the foundational space at the March Meeting a really exciting forum for the exchange of new ideas!
Hope to see many of you in Boston!

GQI needs you! — John Preskill guest blogs

The following post was written by John Preskill.
I’m writing this guest post because I want all you quantum informationists to come to the American Physical Society 2012 March Meeting in Boston next February 27 through March 2. I’m telling you now, because the deadline for submitting your contributed abstract is next Friday. That’s 11/11/11, which is easy to remember. And lucky.
Why come?
Okay, maybe the March Meeting is not for everyone. Yes, last year there were over 8000 physicists at the meeting in Dallas, including over 3200 students. But that’s not everyone. No, not quite.
And those of us who came might not treasure every memory. Standing dutifully in line for what seems like hours, yet never quite reaching that much needed cup of coffee or sandwich. Missing at least 44 of 45 parallel sessions, while wondering what’s the harm in missing all 45? Surrendering to drowsiness after too many 10-minute talks in a row. Wondering who all these people are.
No, it’s not perfect. But the March Meeting is an exhilarating experience, and you really don’t want to miss it. Though you might want to consider bringing your own sandwich.
Quantum information has been well represented at the March Meeting since 2005, thanks to the efforts of the APS Topical Group on Quantum Information (GQI). Each year the membership of GQI has swelled and our participation in the March Meeting has ramped up, which allows GQI to have an even larger presence in the next meeting. Last year 311 GQI members attended and there were 324 contributed talks in the quantum information sessions. I hope we will beat those numbers substantially in 2012.
The March Meeting  provides a valuable opportunity for quantum information enthusiasts to exchange ideas with the broader physics community, and to convey the excitement of our field. Naturally, since the March Meeting is dominated by the condensed matter physicists, the interface of quantum information with condensed matter is especially emphasized, but contributions in all areas of quantum information science are welcome.
The 2012 program will be especially exciting. GQI will sponsor or co-sponsor six sessions of invited talks covering important recent developments:  topological quantum computing with Majorana fermions, quantum entanglement in many-body systems, quantum simulations, quantum computing with superconducting circuits, quantum information processing in diamond, and silicon spin qubits. In addition, there will be an invited session about career opportunities in undergraduate teaching for quantum information scientists.
If our turnout is below expectations, the GQI Program Chair gets the blame. And that’s me. So … don’t make me look bad — come to Boston and contribute a talk!
To register and submit an abstract go to:
http://www.aps.org/meetings/march/
You should flag your abstract with an appropriate quantum information Focus Topic or Sorting Category from the list available at the website, to ensure that your talk is scheduled in the proper session.
And if you are not a member of GQI, please join. Increasing our numbers means more visibility and influence for GQI within APS, and that too is good for the cause of quantum information science.
See you in Boston!

QEC '11 Registration Deadline Extension

The registration and submission deadline for QEC ’11 has been extended by a week and is now open until November 7th.
You can register here: http://qserver.usc.edu/qec11/reg.html
It looks like a great conference… I wonder if the Ghost Pontiff will show up to give his invited talk?

Dial M for Matter

It was just recently announced that the Institute for Quantum Information at Caltech will be adding an extra letter to its name. The former IQI will now be the Institute for Quantum Information and Matter, or IQIM. But it isn’t the name change that is of real significance; rather, it’s the $12.6 million of funding over the next five years that comes with it!
In fact, the IQIM is an NSF funded Physics Frontier Center, which means the competition was stiff, to say the least. New PFCs are only funded by the NSF every three years, and are chosen based on “their potential for transformational advances in the most promising research areas at the intellectual frontiers of physics.”
In practice, the new center means that the Caltech quantum info effort will continue to grow and, importantly, it will better integrate and expand on experimental efforts there. It looks like an exciting new chapter for quantum information science at Caltech, even if the new name is harder to pronounce. Anyone who wants to become a part of it should check out the open postdoc positions that are now available at the IQIM.

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 4

The fourth and final day of the conference. Update Oct 26: The slides from the talk will be posted online here.

Quantum Information in Quantum Many-Body Physics

Roger Melko, Entanglement entropy in exotic phases of quantum matter.

Review of quantum Monte Carlo (QMC) and the sign problem. He wants to compute Rényi entanglement entropies for frustrated spin systems. He states that the record for Lanczos diagonalization for just the ground state is… 48 spins. (Wow! Presumably this is exploiting some specific symmetries, but still, that’s impressive.) He is worried about “weak” critical points, where the correlation length is merely larger than the maximum simulable system size but doesn’t actually diverge in the thermodynamic limit, and works on trying to distinguish these possibilities. In systems with long-range entanglement or topological order, the entanglement scaling can act as a “Landau-like” order parameter. It can also give a picture of “non-Landau” (deconfined) quantum critical points. The subleading correction to the area law contains universal information. We will compute the Rényi entanglement entropy. Introduce the valence bond basis for QMC sampling (Sandvik PRL 2005); in particular, use projector QMC. The estimator that they use for estimating expectation values is weighted by the action of a projector on a trial (valence bond) state. In this way, many quantities can be estimated, such as energies, order parameters, real-space correlation functions, etc. In order to measure entanglement, we can use the swap test. We can use these techniques to study many models: Heisenberg (Néel ordered ground state), resonating valence bond spin liquids, J-J’ models (conventional quantum critical points), J-Q models (unconventional, deconfined quantum critical points). Showed some results for the Heisenberg model on a torus. Stochastic series expansion: expand the finite-temperature partition function by expanding the exponential in its Taylor series. You get powers of $latex -\beta H$, which you can sample in various ways and using various update rules. (Several buzzwords that I didn’t understand: worm algorithms, directed loop algorithms?) To access Rényi entropies, we can use the replica trick by gluing together several Riemann sheets (depending on which order of the Rényi entropy is desired). Can also compute (Rényi) mutual information. Want to study spin liquids. One possible model, the kagome lattice Bose-Hubbard model, supports a $latex \mathbb{Z}_2$ quantum spin liquid. This model is a loop gas similar to the toric code. When computing (Rényi) topological entanglement entropy there is a finite-size “chord length” term that doesn’t exactly cancel. Showed some numerics for this model.
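The swap trick mentioned above is easy to demonstrate directly (a toy sketch of the idea, mine, not Melko’s QMC implementation): on two copies of a state, the expectation value of the operator that swaps region A between the copies equals $latex \mathrm{Tr}\,\rho_A^2 = e^{-S_2}$.

    import numpy as np

    def renyi2_via_swap(psi, dA, dB):
        """Compare Tr(rho_A^2) computed directly and via the two-copy swap trick."""
        T = psi.reshape(dA, dB)
        rhoA = T @ T.conj().T
        direct = np.trace(rhoA @ rhoA).real
        two = np.einsum('ab,cd->abcd', T, T)    # |psi> |psi> on two copies
        swapped = np.einsum('abcd->cbad', two)  # swap the two A factors
        via_swap = np.vdot(two, swapped).real   # <psi|<psi| SWAP_A |psi>|psi>
        return direct, via_swap

    psi = np.random.randn(6) + 1j * np.random.randn(6)
    psi /= np.linalg.norm(psi)
    print(renyi2_via_swap(psi, 2, 3))           # the two values agree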

Frank Verstraete, The entanglement spectrum of strongly correlated quantum many-body systems.

Began with an overview of what questions people care about with regard to the entanglement spectrum: area laws, universal terms, dimensional reduction and holography. Says the following (I’m paraphrasing except for the quotes): since I was reluctant to prepare my talk, I’ve decided, “with a stroke of genius”, to turn this talk around and ask the audience what they would like to hear about, so that I don’t disappoint anyone. He discusses the notion of PEPS as dimensional reduction: we argue that we can map 2D quantum systems to 2D classical systems (the PEPS), even though this is potentially difficult (complexity-theoretic obstacles in general), and we know that 2D classical systems can be mapped to 1D quantum systems. But we know how to deal with 1D quantum systems if locality is maintained throughout this series of mappings. These methods should work for gapped systems, but will break down for critical systems. A long review of why we care about area laws, with some examples and numerics for gapped and critical systems. We can only get an area law if there is very little weight in the tail of the Schmidt spectrum. But you also need to show that in cutting small Schmidt coefficients across one cut you don’t truncate large Schmidt coefficients across another cut. “Because there is an isometry that maps virtual degrees of freedom to physical ones in an MPS, this proves that the virtual degrees of freedom are real. This is the main message of this talk.” (?) Symmetries present in the state are inherited in the tensor network description, for example in the AKLT spin chain. Review of cMPS. Showed some numerical calculations of the entanglement spectrum in the Lieb-Liniger model using cMPS. Even though this model is known to be critical, we see fast decay of the Schmidt coefficients.

Stephan Inglis, Measuring Rényi entropies in finite-temperature QMC using Wang-Landau sampling.

Review of the definition of the Rényi entropy and why we care about it. Review of classical MC with the Metropolis update rule. The Wang-Landau (WL) algorithm calculates the density of states (DOS) instead. Assume an initial DOS, generate a new state and visit it with probability $latex p = \min(g(E_i)/g(E_f), 1)$, where $latex E_i$ and $latex E_f$ are the energies before and after the proposed move; then update the DOS by multiplying by some factor $latex f$, so that $latex g_{i+1}(E) = g_i(E) \times f$. WL works by calculating the DOS as a function of the energy. In stochastic series expansion (SSE) quantum Monte Carlo (QMC), the energy of the system is related to the number of operators sampled, so a more natural way to parameterize things is in terms of $latex \beta$ and a power series expansion. WL gives a much simpler way of computing Rényi entropies because there is no integration needed. At finite temperature, we can use the mutual information instead. Showed a few examples. Now a discussion of the noise in WL. Is there a self-consistent way to check the noise? Check the first derivative of the DOS. The noise in the derivative seems to be uncorrelated, so we can just average several simulations to converge to accurate results. The advantages of WL: only one simulation is required rather than many, there is no explicit cumulative integration error, and results can be improved by building on old data rather easily. The difficulties are that a single simulation takes longer and it is harder to measure observables that are not related to the energy.
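Here’s a minimal classical Wang-Landau sketch for a tiny 1D Ising ring (my own toy, with the usual textbook flatness criterion, not necessarily the parameters used in the talk):

    import numpy as np

    def wang_landau(N=12, lnf_final=1e-4, flat=0.8):
        """Estimate the log density of states of a 1D Ising ring by WL sampling."""
        spins = np.random.choice([-1, 1], N)
        E = int(-np.sum(spins * np.roll(spins, 1)))
        levels = {e: i for i, e in enumerate(range(-N, N + 1, 4))}
        logg = np.zeros(len(levels))          # running estimate of log g(E)
        hist = np.zeros(len(levels))
        lnf = 1.0
        while lnf > lnf_final:
            i = np.random.randint(N)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
            # accept with prob min(g(E_old)/g(E_new), 1), as described above
            if np.random.rand() < np.exp(logg[levels[E]] - logg[levels[E + dE]]):
                spins[i] *= -1
                E += dE
            logg[levels[E]] += lnf            # g(E) <- g(E) * f
            hist[levels[E]] += 1
            if hist.min() > flat * hist.mean():   # histogram flat: shrink f
                hist[:] = 0
                lnf /= 2                      # ln f -> ln f / 2, i.e. f -> sqrt(f)
        return logg - logg.min()

    print(wang_landau())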

Jutho Haegeman, Time-dependent variational principle for quantum many-body systems.

Time-dependent methods for quantum lattice systems include time-evolving block decimation (Vidal). There are several disadvantages, however: the method breaks symmetries, the state temporarily leaves the variational manifold, the truncation is suboptimal for infinite systems, and it is manifestly discrete in time. Can we implement this with cMPS? We have to use some results from Dirac. We project onto the tangent plane of the variational manifold at each point in time. If the time-dependent evolution never leaves the manifold, then you get a non-linear differential equation within the manifold. This method inherently respects all symmetries and there is no Trotter error. It is globally optimal (spatially… it is still local in time) and there is no truncation. We will focus on uniform MPS. We need to compute overlaps of tangent vectors, and this introduces a metric. We can simplify this through gauge fixing. MPS are gauge invariant in the sense that if we conjugate the bond indices we get the same physical state. Taking tangent vectors, we get an additive correction which depends only on the choice of gauge, so we can pick the most convenient choice. Describes (pictorially) some gauge-fixing conditions which make things particularly nice. Can use this for both imaginary and real time evolution. The method also provides additional checks on convergence by checking the length of the tangent vector. Showed some examples of numerics. Near a steady-state solution, you can linearize the differential equation to simplify things. This leads to the linear (tangent) subspace around the stable point (Rayleigh-Ritz equations). Can study excitations with momentum $latex k$ by combining the Östlund-Rommer ansatz and the Feynman-Bijl ansatz (single-mode approximation). More numerics. Could also study domain walls with momentum $latex k$ using the Mandelstam ansatz. More numerics.

Ann Kallin, Scaling of entanglement entropy in the 2D Heisenberg ground state.

QMC in the valence bond basis for measuring the Rényi 2-entropy. Spin-1/2 antiferromagnetic Heisenberg model on a square lattice with only nearest-neighbor interactions. Use the power method to get something close to the ground state. Showed some numerics… the method converges well on up to a 20 by 20 lattice. They do QMC sampling with importance weighting. The power of the Hamiltonian acting on the trial state can be written as a sum over products, and this is what they sample. Then they use an update rule to switch the valence bonds to do the sampling. Computing the overlap between two valence bond states is easy, and just amounts to counting the number of loops in the resulting diagram when you superimpose both valence bond patterns on the same lattice (see the sketch below). This is the “regular” VB QMC. There is also a “loop” version of the algorithm. Now you have two trial states and sandwich your $latex m$ different operators from the Hamiltonian. This method is similar to the SSE method discussed earlier in the conference, but with different boundary conditions. Then some numerics with 100 sites on a 1D Heisenberg chain. The loop algorithm converges quite quickly as long as the size of the region isn’t too large (as a fraction of the total system size). One further improvement is to use a ratio trick, namely, measure ratios of entropies for different region sizes. In 2D, they wanted to look at the effect of corners in the region on the entropy. Chose square-shaped regions and stripes around the torus. Plot scaling of the 2-entropy as a function of the subsystem size. They fit to a function of the form $latex a L + b \log L + c$. Showed some nice pictures of “Rényibows” which simultaneously plot the entropy over several system sizes and subsystem sizes. (The name comes from the colors used in the plot.) One of the things they observed was a 1D CFT-like chord-length correction in 2D systems. Also saw some effects due to corners.
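The loop-counting step is simple enough to sketch (my own illustration; up to sign conventions the overlap of two valence-bond states is $latex 2^{N_\circ - N/2}$, with $latex N_\circ$ the number of loops in the superimposed bond pattern):

    def count_loops(m1, m2):
        """m1, m2: perfect matchings as dicts site -> partner.  Their union
        decomposes into closed loops; walk them, alternating matchings."""
        seen, loops = set(), 0
        for start in m1:
            if start in seen:
                continue
            loops += 1
            site, use_first = start, True
            while site not in seen:
                seen.add(site)
                site = (m1 if use_first else m2)[site]
                use_first = not use_first
        return loops

    # Two valence-bond patterns on 4 sites:
    vb1 = {0: 1, 1: 0, 2: 3, 3: 2}
    vb2 = {0: 3, 3: 0, 1: 2, 2: 1}
    print(count_loops(vb1, vb2))    # 1 loop -> overlap 2^(1 - 2) = 1/2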

Johannes Wilms, Mutual information in the Lipkin-Meshkov-Glick model.

We will work with the mutual information because it is a quantity which describes mixed states, and we will consider finite temperature. Define the LMG model: every spin interacts with every other spin ($latex XX$ coupling), together with a transverse field. This model was originally studied in the context of atomic nuclei, and more generally for shells of finite Fermi systems and metal clusters, He-3 droplets, “magnetic molecules”, and also mappings to BEC and cavity QED. More recently people have looked at ground state and thermal properties. Shows the phase diagram, temperature versus magnetic field strength. We can compute many things in this model by exploiting symmetry. For example, total angular momentum is a good quantum number. The Hamiltonian decomposes into polynomial-sized blocks which have exponential multiplicity; see the sketch below. The best way to study the problem is to use Clebsch-Gordan coefficients. Showed a plot of concurrence in the phase diagram… it doesn’t work so well. On the other hand, the mutual information perfectly tracks the phase transition. They looked at the scaling of the mutual information at criticality. They find that it scales linearly with the log of the system size, with a prefactor of 1/4 in the finite temperature case (as opposed to 1/3, which is what you expect at zero temperature).
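Here’s a hedged sketch of that block structure (one common convention for the LMG Hamiltonian, $latex H = -(\lambda/N) S_x^2 - h S_z$ restricted to the maximal-spin sector; the talk’s normalization may differ):

    import numpy as np

    def collective_spin(N):
        """Collective operators Sx, Sz in the spin-j block with j = N/2."""
        j = N / 2
        m = np.arange(j, -j - 1, -1)        # basis ordered m = j, j-1, ..., -j
        sz = np.diag(m)
        ladder = np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1))   # S+ matrix elements
        sp = np.diag(ladder, 1)
        sx = (sp + sp.T) / 2
        return sx, sz

    def lmg_gap(N, lam=1.0, h=0.5):
        sx, sz = collective_spin(N)
        H = -(lam / N) * sx @ sx - h * sz   # an (N+1) x (N+1) matrix, not 2^N
        w = np.linalg.eigvalsh(H)
        return w[1] - w[0]

    print(lmg_gap(100))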

Winton Brown, Quantum Hammersley-Clifford theorem.

The classical HCT is a standard representation theorem for positive classical Markov networks. Recently, quantum Markov networks have been used in approximation methods for lower bounds on the free energy of quantum many-body systems. Defined conditional mutual information and the strong subadditivity inequality. The Markov condition in the classical case is given by $latex I(A:C|B) = 0$, which implies that $latex p_{ABC} = p_{A|B} p_B p_{C|B}$ where $latex p_{A|B} = p_{AB}/p_B$. A Markov network is a generalization of a Markov chain to an arbitrary graph $latex G$. The Markov condition just says that whenever I partition the graph into three sets of vertices $latex A, B, C$ with $latex B$ separating $latex A$ and $latex C$, then $latex I(A:C|B) = 0$. The classical HCT says that any positive probability distribution factorizes over the cliques of $latex G$. The proof is simple. Take the log of the probability distribution. From the conditional independence, we find that $latex \log p$ is a sum of two terms that act trivially on either $latex A$ or $latex C$. … couldn’t get the rest down in time. For quantum states, we just use the quantum conditional mutual information. What quantum states can be supported in this framework? We immediately find that $latex \rho = \rho_{AB} \rho_{BC}$ where the two factors commute. Then repeating the proof from the classical case runs into trouble because we don’t know if the various terms commute. Consider a graph which has no 3-cliques. On these graphs we have enough structure that each two-body operator has to be zero. This is almost enough, but some 1-body operators might remain. But it turns out this is not an obstacle. Unfortunately, there is no simple general result. A counterexample exists with a five-vertex graph with a single fully connected central node and a square of four nodes surrounding it. Now consider a connection to PEPS. We can construct a PEPS by applying a linear map to each site of virtual spins. If the linear map is unitary, then the PEPS built in this way are quantum Markov networks. Under what conditions can we show the reverse implication? For a non-degenerate eigenstate of a quantum Markov network, the Markov properties imply an entanglement area law. Thus, an HC decomposition implies a PEPS representation of some fixed bond dimension. Open problem: show under what conditions quantum Markov networks which are pure states have an HC decomposition. There exist non-factorizable pure-state quantum Markov networks (there is a simple example).
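The classical Markov condition at the heart of all this is easy to check numerically; here’s a small sketch (mine, not from the talk):

    import numpy as np

    def cond_mutual_info(p):
        """I(A:C|B) for a three-variable distribution p[a, b, c], in bits."""
        pab = p.sum(axis=2); pbc = p.sum(axis=0); pb = p.sum(axis=(0, 2))
        I = 0.0
        for a in range(p.shape[0]):
            for b in range(p.shape[1]):
                for c in range(p.shape[2]):
                    if p[a, b, c] > 0:
                        I += p[a, b, c] * np.log2(
                            p[a, b, c] * pb[b] / (pab[a, b] * pbc[b, c]))
        return I

    # A Markov chain A -> B -> C factorizes as p(a) p(b|a) p(c|b):
    pa = np.array([0.3, 0.7])
    pb_a = np.array([[0.9, 0.1], [0.2, 0.8]])
    pc_b = np.array([[0.6, 0.4], [0.5, 0.5]])
    p = np.einsum('a,ab,bc->abc', pa, pb_a, pc_b)
    print(cond_mutual_info(p))    # ~0: the A - B - C Markov condition holds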

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 3

Day 3 of the conference.

Quantum Information in Quantum Many-Body Physics

Guifre Vidal, Criticality, impurities and real-space renormalization, joint work with Glen Evenbly.

First an overview of the multi-scale entanglement renormalization ansatz (MERA). Basic notions: isometries, disentanglers, causal cone. Bounded causal cone width means that it is possible to efficiently evaluate the expectation value of local observables. One can also coarse-grain a local operator by applying one level of the MERA. (For a ternary MERA, a two-body term gets mapped to a two-body term.) We will focus on critical systems, and then put in impurities. To enforce scale and translation invariance, we characterize the MERA by just a single pair of tensors $latex U$ and $latex W$ repeated across the whole network. MERA seems to be a good ansatz to describe critical systems where such invariance is manifest. MERA also can exhibit polynomial decay of correlations. Define the scaling superoperator. The eigenvectors of this superoperator are the scaling operators, and (the logarithms of) its eigenvalues give the scaling dimensions of the underlying CFT. How does one choose the optimal tensors given a local Hamiltonian? He dodges the question… doesn’t want to get into such details in this talk. Showed an example for the 1D critical Ising model with transverse field. Now let’s consider an impurity at the origin. Now we lose translation invariance, but we still have scale invariance (conformal defect). The bulk scaling operators in the absence of an impurity have zero expectation value. However, in the presence of an impurity the one-point functions decay like a power of the distance to the impurity. We need to add new scaling operators and new scaling dimensions. Translation invariance is broken, but we can’t possibly optimize over an extensive number of tensors (because we could never reach the thermodynamic limit). Well, desperate times call for desperate measures. Introduce something called the “principle of minimal influence” (PMI) in the RG flow. PMI says: given a MERA for the initial (pure) problem, we can obtain a MERA for the problem with an impurity by varying only the tensors in the causal cone of the region containing the impurity. We don’t expect the principle to work in all cases, such as in systems with spontaneous symmetry breaking where the impurity restores the symmetry. We are still exploring the limits of applicability of this principle. If you split the isometries into two “half isometries”, you find that the total number of tensors you need to optimize is still $latex O(1)$. Why is this ansatz plausible? For one thing, we can show that the scaling superoperator now exhibits power-law decay with respect to the distance from the impurity. The optimization goes as follows. First optimize with respect to the bulk Hamiltonian (no impurity). Next, use the bulk tensors to map the original impurity problem to an effective impurity problem. This gives a new Hamiltonian with exponentially decaying interaction strengths. Finally, we can reduce the optimization via “folding in half” to that of an MPS for an open boundary problem. This same form appears in Wilson’s solution of the Kondo problem. However, the impurity MERA can be used beyond the setting of free fermions, unlike Wilson’s original approach. Showed the example of the transverse-field Ising model with an $latex XX$ impurity. This model allows an exact solution via CFT, and the numerics agree with the exact solution for various impurity strengths. We can extend this to more than one impurity by treating each impurity individually in their non-intersecting causal cones, and then once the causal cones intersect we do one more variation.
We can also treat an impurity that "cuts" the chain, a boundary between two half-chains coupled through the impurity, and even plain boundary conditions.
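
To make the scaling superoperator concrete, here is a tiny numerical sketch (my own, not from the talk). For an operator on the central site of a block in a ternary MERA, one coarse-graining step involves only the isometry $latex W$, and the eigenvalues $latex \lambda_\alpha$ of the resulting superoperator give scaling dimensions via $latex \Delta_\alpha = -\log_3 \lambda_\alpha$. I use a random isometry, so the numbers below are meaningless; in a real calculation $latex W$ comes from energy minimization for a critical Hamiltonian.

```python
import numpy as np

# Scaling dimensions from the one-site scaling superoperator of a ternary MERA.
# For the central site of a block, ascending involves only the isometry w, and
# for a scale-invariant MERA the eigenvalues lambda_a give Delta_a = -log_3 lambda_a.
# Here w is a *random* isometry, purely to exhibit the structure.

chi = 4
rng = np.random.default_rng(7)
a = rng.standard_normal((chi**3, chi)) + 1j * rng.standard_normal((chi**3, chi))
w, _ = np.linalg.qr(a)                  # isometry: w^dag w = identity on chi
wt = w.reshape(chi, chi, chi, chi)      # legs: (left, center, right, top)

# superoperator O -> w^dag (I (x) O (x) I) w, represented as a chi^2 x chi^2 matrix
S = np.einsum('ijkt,iJkT->tTjJ', wt.conj(), wt).reshape(chi**2, chi**2)

lam = np.linalg.eigvals(S)
lam = lam[np.argsort(-np.abs(lam))]
delta = -np.log(np.abs(lam)) / np.log(3.0)
print(delta[:5])   # leading entry is 0: the identity is always a fixed point
```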

André-Marie Tremblay, Success and limitations of dynamical mean-field theory, joint work with Sordi, Sénéchal, Haule, Okamoto, Kyong and Civelli.

First recall how one "builds" a metal. We take a lattice and put orbitals on the sites. If the orbitals overlap, then it takes no energy to excite an electron. However, this doesn’t always work, e.g. in NiO. Recall the Hubbard model with on-site $latex U$ and hopping strength $latex t$ (and $latex t'$ for nearest diagonal neighbors, etc.). When $latex U=0$, we can Fourier transform and solve the model. When $latex t=0$, everything is localized, and there are $latex 2^n$ degenerate ground states at half filling corresponding to the different electron configurations with one electron per site. We can observe antiferromagnetism in the Hubbard model. We see a Mott transition in the region where $latex t$ and $latex U$ are of the same order, and at this point in the phase diagram neither the plane-wave nor the localized wavefunctions give a good description. One can write an effective model with Heisenberg coupling. We would like to measure single-particle Green’s functions. Why do we care about Green’s functions? We can use them to compute the values of other observables, and we can measure them directly via photoemission. Fermi liquid theory means that the system is not too far from a free-electron problem. We want to be able to solve the problem in the presence of an impurity. Let’s review dynamical mean-field theory (DMFT). We begin by working in infinite dimensions. Then the self-energy is a function only of the frequency. We want to compute the self-energy of the impurity problem, and then use that self-energy (which depends only on frequency) for the whole lattice. Now project the lattice onto a single site and adjust the bath so that the single-site density of states obtained both ways agree. There are some self-consistency equations. We can use a number of methods: the cavity method, the local nature of perturbation theory in infinite dimensions, expansion around the atomic limit, effective medium theory, the Potthoff self-energy functional. Use the universal self-energy functional method (see the talk by Sénéchal). Vary with respect to the parameters of the cluster (including Weiss fields). In this context, we can view DMFT as a stationary point in the variational space. What about the impurity solvers? The most trivial one is just exact diagonalization. One could also do Monte Carlo using an ergodic Markov chain. Showed a bunch of plots of these numerical methods for various models. The strengths of DMFT: one can obtain one-particle properties and phase diagrams. The weaknesses: order parameters that are not coupled to observables quadratic in creation and annihilation operators, transport (four-point correlation functions), and analytic continuation for QMC solvers. A future challenge is to find the optimum environment.
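
To fix ideas about those self-consistency equations, here is a schematic sketch (mine, not the speaker’s) of the DMFT loop for the Bethe lattice, where the self-consistency condition closes as $latex \Delta(i\omega_n) = t^2 G(i\omega_n)$. The impurity solver is stubbed out with just a static Hartree shift, since a real solver (ED, CT-QMC, IPT, …) is the hard part of the method; all parameter values are illustrative.

```python
import numpy as np

# Schematic DMFT self-consistency loop on the Bethe lattice at half filling.
# The impurity solver is a placeholder; swap in a real one.

t, U, beta, n_iw = 0.5, 2.0, 32.0, 512
iw = 1j * (2 * np.arange(n_iw) + 1) * np.pi / beta   # fermionic Matsubara frequencies

def impurity_solver(g0):
    """Placeholder: impurity self-energy given the Weiss field g0.
    Here only the static Hartree shift at half filling."""
    return np.full_like(g0, U / 2.0)

mu = U / 2.0                        # chemical potential at half filling
sigma = np.zeros(n_iw, dtype=complex)
g = 1.0 / (iw + mu - sigma)         # initial guess for the local Green's function
for sweep in range(500):
    delta = t**2 * g                # Bethe-lattice hybridization function
    g0 = 1.0 / (iw + mu - delta)    # Weiss field of the effective impurity model
    sigma = impurity_solver(g0)
    g_new = 1.0 / (iw + mu - delta - sigma)
    if np.max(np.abs(g_new - g)) < 1e-12:
        break
    g = 0.5 * g + 0.5 * g_new       # damped update for stability
print(sweep, g[0])
```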

Matthias Troyer, Galois conjugates of topological phases, arXiv:1106.3267.

The question is: should we explore the computational power of non-unitary topological phases? Such phases appear through Galois conjugation of unitary topological phases. The resulting models are non-Hermitian, but still have a purely real spectrum. Do they make sense? First, a review of topological phases and anyons (both Abelian and non-Abelian). Review of $latex \mathrm{SU}(2)_k$ fusion rules. Such anyons might appear in the Pfaffian state (Moore & Read), which has Ising anyons (level $latex k=2$), or in the parafermion state (Read & Rezayi), which has Fibonacci anyons (level $latex k=3$). How can we make more exotic states? We start with an anyonic liquid on a high-genus surface. Picture two sheets which are connected by fusing a bunch of perforations through both sheets. These create doubled (non-chiral) models. We can consider two types of quantities: flux through a hole and flux through a tube. We can build a family of basis states to describe the anyon wavefunctions by threading a "skeleton" through each of the tubes. The basis states are labeled by string nets of particle labels on the skeleton lattice. Now we add the Hamiltonian for the model. We can either add flux through a tube or a hole, and each of these costs some different amount of energy. Each of these terms acts on either plaquettes or edges of the skeleton lattice. The plaquette term gives a ground state without flux through the plaquettes. This is like closing each of the holes: we get vacuum states of anyons on each of the two sheets. This is just the toric code or the Levin-Wen model. It’s a topological phase which is robust at zero temperature to weak perturbations. This can be solved exactly, even for the excited states. Violations of the vertex constraint are chiral anyonic excitations living on the surface. Consider a finite density of anyons, and let’s pin the anyons into fixed positions. We would like to add a Heisenberg-like interaction Hamiltonian to this. We just introduce an energy difference between the various labels of the particle types. Review of the fusion basis for anyons, specialized to the Fibonacci case. Review of the F-moves ($latex 6j$ symbols, Clebsch-Gordan), the pentagon and hexagon equations. So what does a Heisenberg chain for anyons look like? $latex H = \sum_i F_i \Pi_i^0 F_i$, where $latex F_i$ is the F-move that exchanges two nearest neighbors in a chain and $latex \Pi_i^0$ is a projector onto a maximally entangled state. Ran numerical simulations to compute the gap… but it was gapless! The gap decreased like $latex 1/L$, so we guess it is a critical model. We can compute the central charge using the connection with entanglement entropy. It turns out that the model is exactly solvable using the Temperley-Lieb algebra. They studied these interacting chains for various values of the level $latex k$. There are lots of known equivalences to CFTs depending on the central charge and the sign of the coupling (antiferromagnetic or ferromagnetic). Plot of the energy spectrum, with very good agreement between the exact solution and numerics. At finite anyon density there are gapless edge modes and a nucleated liquid. This foreshadows the situation in 2D. You get a nucleated liquid with a finite bulk gap and edge states. A novel quantum Hall liquid is nucleated inside the parent liquid. Now we move to the Galois conjugation. Review of Galois theory. Different roots of a polynomial are Galois conjugates of each other.
The algebraic properties are the same, but the geometric properties are different. As an example, consider again the Fibonacci case. The elements of the F-tensor $latex F_1^{111}$ can be expressed in terms of the golden ratio $latex \phi$, or in terms of $latex -1/\phi$. However, this second theory (the Yang-Lee theory) is non-unitary. But if you look at the spectrum of the antiferromagnetic Yang-Lee model, you get a completely real spectrum! The central charge is negative ($latex -3/5$). The vacuum state is not the lowest-energy state in this model. Similar conclusions hold for the ferromagnetic case. How does the entanglement entropy scale? It can’t still obey $latex (c/3)\log(L)$ since the central charge is negative, right? Let’s look at the doubled Yang-Lee model. We can redefine the inner product to get a self-adjoint Hamiltonian (Bender et al., PT-symmetric quantum mechanics). We can solve this doubled YL model exactly, just like with the Levin-Wen models before. Now the code property that defines topological order has a "left-right" eigenvector version, since these are now distinct notions. There are several ideas one might have to turn it into a Hermitian model: square the plaquette terms, or use just the projector onto the left or right eigenspaces. But these approaches fail to give a topological phase which is robust… they are sensitive to local perturbations. "Hermitianizing" destroys the left-right code property. Might there be some local transformation to a Hermitian model? If not for Fibonacci, then maybe for some other non-unitary TQFT? Norbert: can you use a tensor network description of the ground state to make a Hermitian parent Hamiltonian? Which ground state do you choose? The left or the right? Main result: Galois conjugates of TQFTs where the braid group acts densely in $latex \mathrm{SU}(2)$ cannot be realized as ground states of quasilocal Hermitian Hamiltonians. Non-Abelian topological phases offer universal quantum computation; the minimal models of CFT are Heisenberg models of non-Abelian anyons; Galois conjugation of topological phases gives intriguing non-unitary theories and non-Hermitian models; but non-unitary phases will not appear in physical systems. Tobias: can’t we add a single extra spin which couples to all the spins and make the model Hermitian?
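
Here is a small sketch (my own reconstruction, not code from the talk) of the anyonic Heisenberg chain and its Galois conjugate in the fusion-path basis. The matrix elements of the projector onto the trivial fusion channel follow the standard golden-chain conventions (Feiguin et al., PRL 98, 160409); the branch of the square root in the conjugated F-matrix is my own choice, on which the spectrum shouldn’t depend. The point is to see numerically that the non-Hermitian Yang-Lee chain nevertheless has a real spectrum.

```python
import numpy as np
from itertools import product

# Golden chain H = -sum_i Pi_i (projector onto the trivial fusion channel) in
# the fusion-path basis, and its Yang-Lee Galois conjugate (phi -> -1/phi).
# Labels: 0 = trivial, 1 = tau; valid paths have no two adjacent 0's.

PHI = (1 + np.sqrt(5)) / 2

def projector_block(x):
    # F-matrix block satisfying F @ F = identity; x = 1/PHI gives Fibonacci,
    # x = -PHI its Galois (Yang-Lee) conjugate (complex square root branch).
    s = np.lib.scimath.sqrt(1 - x * x)
    F = np.array([[x, s], [s, -x]])
    return F @ np.diag([1.0, 0.0]) @ F     # projector onto the trivial channel

def golden_chain(L, x):
    # open chain of L tau anyons, boundary path labels fixed to tau
    paths = [p for p in product((0, 1), repeat=L - 1)
             if all(a or b for a, b in zip((1,) + p, p + (1,)))]
    index = {p: k for k, p in enumerate(paths)}
    blk = projector_block(x)
    H = np.zeros((len(paths), len(paths)), dtype=complex)
    for p, k in index.items():
        full = (1,) + p + (1,)
        for i in range(1, L):
            l, m, r = full[i - 1], full[i], full[i + 1]
            if l == 0 and r == 0:          # (1, tau, 1): the pair fuses to trivial
                H[k, k] -= 1.0
            elif l == 1 and r == 1:        # (tau, x, tau): 2x2 block in x
                for m2 in (0, 1):
                    q = index[p[:i - 1] + (m2,) + p[i:]]
                    H[q, k] -= blk[m2, m]
    return H

for name, x in [("Fibonacci", 1 / PHI), ("Yang-Lee", -PHI)]:
    ev = np.linalg.eigvals(golden_chain(10, x))
    print(name, "max |Im E| =", np.max(np.abs(ev.imag)))  # both numerically real
```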

Renato Renner, Aisenstadt Lecture, An information-theoretic view of thermalization.

Consider a very trivial system: a classical binary random variable $latex C$ (for classical) which takes the values 0 or 1 with equal probability. Now let’s introduce an observer $latex A$ who has some knowledge about $latex C$. Suppose that $latex A=C$ with probability $latex 1-\epsilon$ and $latex A=1-C$ with probability $latex \epsilon$. Now when we condition the probability of $latex C$ on the state of $latex A$ we get a new distribution. A Bayesian would just call this an update. Suppose now that a new observer $latex Q$ has quantum information about the system, so the conditional state of $latex Q$ is a quantum state. If we try to condition the state of $latex C$ on the quantum information $latex Q$, things become more interesting. For example, in quantum cryptography there is no reason to assume that the adversary is classical, so it makes sense to consider this. Now let’s move on to thermodynamics. Consider a Szilard engine, i.e. a box with one particle in it and a piston that can compress to the middle of the box. If we let the "gas" expand isothermally, or compress it, giving or taking work, then we find that the work to transition between the two states satisfies $latex W = kT\log 2$. From information theory, we know that a two-state system stores information. To erase information costs $latex kT\log 2$ units of energy. Landauer showed that in fact this is optimal (Landauer’s principle), and this is independent of the physical representation of the bit. The argument is: if you could erase with a lower energy cost, then you could use the Szilard engine to extract work for free from heat (which violates the second law). If we have conditional knowledge about the state of the Szilard engine, then we can extract a different amount of work, namely $latex W = kT H(\rho) \log 2$, where $latex H$ is the binary entropy and $latex \rho$ describes our (potentially quantum) knowledge about the state of the system. (Technical note: this is the von Neumann entropy in the asymptotic setting and the max-entropy in the single-shot setting.) Consider a computer with some registers $latex R_1$ and $latex R_2$ that I would like to erase. If I erase them in sequence, then the total work required would just sum the individual works using the previous formula. If instead we did some joint operation, we would only pay $latex \propto H(\rho_{12})$, which is in general less than the sum of the individual entropies. That is, in general we have $latex H(\rho_{12}) \le H(\rho_1) + H(\rho_2)$, so it makes sense to take advantage of the joint information. We could instead use a formula in terms of conditional entropies, by just changing the formula in the obvious way. It turns out that this is the correct formula: $latex W = kT\log 2\, H(S|\mathrm{obs})$, where $latex S$ is the system and $latex \mathrm{obs}$ represents the observer. If we take advantage of this conditional information, then we can serially achieve the same work as the optimal joint protocol. This is a manifestation of the so-called chain rule for entropy: $latex H(A|B) + H(B) = H(AB)$. Now let’s discuss thermalization. There is some large system (including an environment). If the total system is in a pure quantum state, how can it thermalize? What does that even mean? We can look at just a subsystem, which has a reduced density operator. Recall Alioscia’s talk for an example of some conditions under which the subsystem thermalizes.
However, if two separate subsystems are separately thermal, that does not imply that the joint system is thermal. If instead we condition on the other subsystem, we could indeed reach this conclusion. As before, we cannot simply condition on quantum information, but the conditional entropies are still well-defined. We can introduce the decoupling lemma, which guarantees that the joint state is nearly a product state. It is a sufficient condition for decoupling, phrased in terms of entropies conditioned on the observer. Namely, decoupling holds if we typically have $latex H(S|\mathrm{obs}) \ge 2k - n$, where the total system has $latex 2^n$ degrees of freedom and the system has $latex 2^k$. This is tight. Now apply this condition to thermalization: if the subsystem is sufficiently small, then the system will thermalize.
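
As a toy illustration of the bookkeeping behind $latex W = kT\log 2\, H(S|\mathrm{obs})$ (my own example, not from the lecture), here is a computation of $latex H(S|\mathrm{obs}) = H(S,\mathrm{obs}) - H(\mathrm{obs})$ for three system-observer states of a qubit pair. The negative value in the entangled case is the quantum twist: erasure can then yield a net gain of work.

```python
import numpy as np

# Conditional von Neumann entropy H(S|obs) for three two-qubit states of
# system + observer: uncorrelated, classically correlated, entangled.

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def cond_entropy(rho):                  # H(S|obs) = H(S,obs) - H(obs)
    rho_obs = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # trace out S
    return entropy(rho) - entropy(rho_obs)

ket0 = np.array([1.0, 0.0])
rho_un = np.kron(np.eye(2) / 2, np.outer(ket0, ket0))   # observer knows nothing
rho_cl = np.zeros((4, 4)); rho_cl[0, 0] = rho_cl[3, 3] = 0.5  # perfect classical copy
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho_q = np.outer(bell, bell)                            # maximally entangled observer

for name, rho in [("uncorrelated", rho_un), ("classical", rho_cl), ("entangled", rho_q)]:
    print(name, cond_entropy(rho))   # 1.0, 0.0, -1.0 (bits)
```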

Tobias Osborne, The continuum limit of a quantum circuit: variational classes for quantum fields, arXiv:1102.5524 and arXiv:1005.1268, joint work with Cirac, Eisert, Haegeman, Verschelde, Verstraete.

Review of quantum fields. There are several ways one might "solve" a QFT: perturbation theory, Monte Carlo, and variational methods. There were problems using the variational method in QFT, namely sensitivity to the UV and poor reproduction of correlations. Review of MPS. Recall that an MPS can be built from a staircase circuit. Review of the area law result due to Hastings. As Tobias himself has proved, log-time evolution can be simulated efficiently, too. Also, a review of MERA. We can view it as a quantum circuit which takes ancillas and prepares a state. How can we pass to the continuum for these variational classes? We can use the quantum circuit preparation prescriptions of MPS and MERA and change the time step for each gate to an infinitesimal quantity. That’s the main idea. Some nice features of cMPS: it is possible to analytically compute all the correlation functions. There is a built-in UV cutoff. There is generic clustering of correlations. One can build cMERA by alternating scaling transformations and local interactions for infinitesimal time steps, and taking the limit. There is one main difference between MERA and cMERA: flexibility in the UV cutoff. cMERA allows a smooth cutoff, but regular MERA only offers a lattice cutoff. These classes also obey entropy/area laws, rigorously in the case of cMPS, with a heuristic argument for cMERA. Gave an example for a simple bosonic system where the cMERA expressions are analytic.
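
Here is a toy cMPS computation (my own sketch of the standard transfer-operator formulas, not code from the talk) showing two of the features just mentioned: correlation functions reduce to finite-dimensional linear algebra, and connected correlations cluster exponentially. I take random $latex Q, R$ in the left gauge $latex Q + Q^\dagger + R^\dagger R = 0$ and compute the connected density-density correlator.

```python
import numpy as np
from scipy.linalg import expm, null_space

# cMPS transfer operator T(rho) = Q rho + rho Q^dag + R rho R^dag; in the left
# gauge the left fixed point is the identity, and expectation values follow
# from the right fixed point rho.

D = 3
rng = np.random.default_rng(0)
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
A = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
K = A + A.conj().T                      # Hermitian part
Q = -1j * K - 0.5 * R.conj().T @ R      # enforces the left-gauge condition

I = np.eye(D)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

rho = null_space(T)[:, 0].reshape(D, D)  # right fixed point of T
rho = (rho + rho.conj().T) / 2
rho /= np.trace(rho)                     # fixes sign and normalization
n0 = np.trace(R @ rho @ R.conj().T).real # particle density <psi^dag psi>

def connected_density_corr(x):
    sig = (R @ rho @ R.conj().T).reshape(-1)
    sig_x = (expm(T * x) @ sig).reshape(D, D)   # evolve the insertion by distance x
    return np.trace(R @ sig_x @ R.conj().T).real - n0**2

for x in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(x, connected_density_corr(x))  # decays exponentially in x
```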

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 2

Day 2 of the conference. By the way, it is pretty difficult to get all this stuff written down correctly the first time through! So, if you see any mistakes please leave a comment so I can fix them.


Barbara Terhal, Adiabatic Quantum Computation and Stoquastic Hamiltonians, arXiv:0806.1746 (joint work with Sergey Bravyi) and quant-ph/0606140 (joint with Bravyi, DiVincenzo and Oliveira).

Some quantum Hamiltonians have ground states which are amenable to Monte Carlo techniques because of the absence of the sign problem. We say a Hamiltonian is stoquastic iff, in some product basis, the off-diagonal elements of each local term are real and non-positive. (One can apply local unitary operations to potentially transform a Hamiltonian into a stoquastic one.) Examples of stoquastic Hamiltonians: ferromagnets, the quantum transverse-field Ising model, Jaynes-Cummings, spin-boson and, importantly, Josephson-junction flux qubit Hamiltonians such as those engineered by D-Wave for adiabatic quantum computation (AQC). Given a stoquastic Hamiltonian, we can define $latex G=I-tH$ for some sufficiently small $latex t$, and $latex G$ is a non-negative matrix. By the Perron-Frobenius theorem, the eigenvector with the largest eigenvalue has non-negative entries, so it can be normalized to a probability distribution. Therefore, we can potentially use quantum Monte Carlo (QMC) to sample from it. Theorem 1: stoquastic frustration-free AQC can be efficiently simulated classically. Recall the matrix $latex G$ from before. We can compute the matrix elements of $latex G$ efficiently because the Hamiltonian is local. Since the Hamiltonian is frustration-free (FF), the ground state is annihilated term-by-term. We would like to set up a Markov chain which allows us to sample from the ground state. We can define a stochastic matrix from $latex G$ and the weights $latex \alpha_j$ in the ground state, namely $latex P_{j\to i} = (D G D^{-1})_{ij} = \frac{\alpha_i}{\alpha_j} G_{ij}$, where $latex D$ is the diagonal matrix with the ground state weights $latex \alpha_i$ on the diagonal. This is a stochastic matrix. Because the Hamiltonian is FF, we can compute the ratio $latex \frac{\alpha_i}{\alpha_j}$. This doesn’t seem to work in the non-FF case. Now if you iterate, and there is a gap, you will converge in polynomial time. In the frustrated case, we don’t know how to compute the ratio anymore. An alternative approach: we know the ground state is well approximated by the Gibbs state at low temperature. The Gibbs matrix is non-negative if the Hamiltonian is stoquastic (but frustrated). The partition function is then a sum over non-negative weights. We can compute each of these weights efficiently, so now we want to estimate the sum with QMC ("world-line QMC"). You can use some kind of Metropolis rule for accepting and rejecting moves.
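
Here is a toy demonstration (mine, not from the talk) of this Markov chain construction for the transverse-field Ising chain. In the frustration-free setting of Theorem 1 the ratios $latex \alpha_i/\alpha_j$ are computable locally; here I cheat and take $latex \alpha$ from exact diagonalization, just to exhibit the structure: $latex G = I - tH$ is entrywise non-negative, $latex P$ is column-stochastic, and its stationary distribution is $latex \alpha_i^2$, i.e. the measurement statistics of the ground state.

```python
import numpy as np
from functools import reduce

def tfim(n, J=1.0, h=1.5):
    """Transverse-field Ising chain (periodic), stoquastic in the Z basis."""
    X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.]); I2 = np.eye(2)
    op = lambda A, k: reduce(np.kron, [A if j == k else I2 for j in range(n)])
    H = -h * sum(op(X, k) for k in range(n))
    H -= J * sum(op(Z, k) @ op(Z, (k + 1) % n) for k in range(n))
    return H

n = 6
H = tfim(n)
t = 0.9 / np.linalg.norm(H, 2)           # small enough that G = I - tH >= 0
G = np.eye(2**n) - t * H
assert G.min() >= 0                       # non-negative matrix in this basis

evals, evecs = np.linalg.eigh(G)
alpha = np.abs(evecs[:, -1])              # Perron vector = ground-state amplitudes
P = alpha[:, None] * G / (evals[-1] * alpha[None, :])   # P_{j -> i}
assert np.allclose(P.sum(axis=0), 1.0)    # column-stochastic

p = np.full(2**n, 2.0**-n)                # iterate the chain from uniform
for _ in range(2000):
    p = P @ p
print(np.max(np.abs(p - alpha**2)))       # tiny: chain samples the ground state
```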

Spiros Michalakis, Stability of Frustration-free Hamiltonians, arXiv:1109.1588. Joint work with J. Pytel.

The challenge is to find a minimal set of assumptions under which gapped Hamiltonians have a spectral gap that is stable against weak local perturbations. Not every gapped system is stable. Consider a simple Ising model where the transverse field is a perturbation. This perturbation splits the degeneracy of the ground state. Another example is an Ising model with a defect at the origin. The lesson seems to be that local distinguishability leads to instability of the ground state degeneracy and/or the gap. Now define FF Hamiltonians (FFH). Every FFH on a lattice $latex \Lambda$ is the extension of another FFH on some smaller ball $latex B \subset \Lambda$. The ground state projectors satisfy $latex P_B P_0 = P_0$, where $latex P_0$ is the global ground state projector. Now define topological quantum order (TQO). Roughly, no sufficiently local operator should be able to distinguish two orthogonal ground states. The toric code is an example of a model with TQO. But TQO is not enough for stability, as a simple counterexample shows (the "Ising toric code with defect"; it’s simpler than it sounds). Let’s define local TQO (LTQO), which says that the local distinguishability between the various ground states, as measured in operator norm, decays rapidly (see the schematic condition below). This LTQO condition implies an area law (with a possible logarithmic correction, which is what we expect, since we haven’t said anything about a spectral gap at this point). Next, we consider another condition called the local gap condition (LG). We want the local Hamiltonians (restricted to some ball $latex B$) to have a gap which decays at most as an inverse polynomial in the size of $latex B$. Open problem: is LG always satisfied if the global Hamiltonian is a sum of local projectors, FF, and has a spectral gap? The main theorem: if $latex H$ is an FFH satisfying LTQO and LG, then under sufficiently weak quasi-local perturbations the gap is stable and the ground state degeneracy doesn’t split too much. Gave a very brief overview of the proof. Some open problems: how can we prove spectral gaps for 2D FFH? What symmetries of the Hamiltonian imply LTQO? Can we prove LG for all FFH that are sums of local projectors?
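
Schematically, and paraphrasing from memory (so constants and the exact decay function are suppressed), the LTQO condition says that for any operator $latex O$ supported on a ball $latex B(r)$,

$latex \| P_{B(r+\ell)}\, O\, P_{B(r+\ell)} - \omega(O)\, P_{B(r+\ell)} \| \le \|O\|\, f(\ell)$

where $latex P_{B(r+\ell)}$ is the ground state projector of the Hamiltonian restricted to the enlarged ball $latex B(r+\ell)$, $latex \omega(O)$ is a scalar, and $latex f$ decays rapidly with $latex \ell$.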

Norbert Schuch, Complexity of commuting Hamiltonians for qubits on a square lattice, arXiv:1105.2843.

Commuting Hamiltonians can have highly entangled ground states (e.g. the toric code). They clearly have a constant spectral gap, and we only need to estimate the ground state energy to constant accuracy. Is estimating the ground state energy of a commuting Hamiltonian easier than the full quantum problem? Main result: for qubits on a 2D square lattice, the problem of determining whether the Hamiltonian is frustration-free is in NP. First we need a structure theorem for commuting Hamiltonians due to Bravyi and Vyalyi (2003). This decomposition was used to show that the 2-body commuting Hamiltonian problem is in NP. The decomposition does not generally extend to $latex k$-body for $latex k \ge 3$, nor to qudits. However, if the connectivity is "sparse enough" then the decomposition works. Idea number one: split the lattice into "white" and "black" squares and use the fact that each layer individually can take advantage of the BV decomposition. But we don’t know if these two decompositions are compatible, so we need a new idea. We can tell if an FF ground state exists if we can find states in each layer that have non-trivial overlap… but how can we compute the overlap? We need to take advantage of the structure. The point is that for qubits the overlap can be computed efficiently. When the spins are qubits, there are only two possibilities for the BV decomposition. One possibility is trivial, whereas the second doesn’t allow any branching structures to form, so the overlap can be computed efficiently. This proof does not provide a constructive procedure for finding the ground state.

Maarten Van den Nest, A monomial matrix formalism to describe quantum many-body states, arXiv:1108.0531.

The motivation is to generalize the Pauli stabilizer formalism in a way which still retains some of its nice properties and captures a larger class of states. Some of the nice properties include: an efficient description of quantum states via the stabilizers, and an understanding of properties of the state via manipulation of the stabilizers. We first define a monomial unitary matrix, which is a permutation matrix times a diagonal matrix of phases. An M-state is one which is stabilized by (i.e., is a +1 eigenstate of) a set of monomial unitaries. We typically want to restrict to special cases, such as $latex k$-local, etc. This class of states encompasses, e.g., stabilizer states for qudits, all quantum double ground states, the AKLT model, W states, Dicke states, coherent probabilistic computations, coset states of Abelian groups, and so on. We define the monomial stabilizer group $latex \mathcal{G}$ to be the group generated by the unitaries; assume this is finite. There is another group, the permutation group $latex \mathcal{P}$, which comes from "forgetting" the phase structure and keeping only the permutation piece. We define the orbits to be the orbits of the computational basis states under the action of the operators in $latex \mathcal{P}$. Some simple theorems: Thm 1, all amplitudes of an M-state are zero outside the orbit. Thm 2, all nonzero amplitudes have equal modulus. If you have an entire M-space, then you can find a basis for the space where each basis state is a uniform-amplitude superposition over a single orbit. The orbits are disjoint, so the dimension of the space is bounded by the total number of orbits. Some tasks, such as sampling the state in the computational basis, are NP-hard in general. However, there are many examples that allow an efficient classical simulation.
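
Here is a tiny numerical check of Theorems 1 and 2 (my own toy, not from the paper), viewing the 3-qubit GHZ state as an M-state: $latex X\otimes X\otimes X$ is a permutation matrix and the $latex Z$-type generators are diagonal, so all three stabilizers are monomial, and the orbit of $latex |000\rangle$ under the permutation part is $latex \{|000\rangle, |111\rangle\}$.

```python
import numpy as np
from functools import reduce

# GHZ as an M-state: XXX is a pure permutation, ZZI and IZZ are diagonal.
X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.]); I = np.eye(2)
stabs = [reduce(np.kron, ops) for ops in [(X, X, X), (Z, Z, I), (I, Z, Z)]]

# the joint +1 eigenspace is the null space of sum_k (I - U_k), which is PSD here
M = sum(np.eye(8) - U for U in stabs)
w, v = np.linalg.eigh(M)
state = v[:, 0]          # unique state stabilized by all three generators

# Thm 1: amplitudes vanish outside the orbit {|000>, |111>};
# Thm 2: the nonzero amplitudes have equal modulus.
print(np.round(np.abs(state), 6))   # [0.707 0 0 0 0 0 0 0.707]
```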

Hector Bombin, Structure of 2D topological stabilizer codes, arXiv:1107.2707, joint work with D. Poulin and G. Duclos-Cianci.

Topological order can be thought of as equivalence classes of many-body systems under the action of constant-depth local unitary quantum circuits. We say that a state has short-range entanglement if such a circuit can map the state to a product state. Other states are said to have long-range entanglement. (Wen, Gu, Chen 2009) We will consider topological stabilizer codes (TSC). These are a subset of the stabilizer code Hamiltonians, in which the local terms are stabilizer code generators. The goal of this talk is two-fold: to show rigorously that local equivalence "works" as a way of classifying gapped phases, and to find the general structure of 2D TSC, including subsystem codes. This allows us to extend techniques which are well known for specific models (such as RG error correction for the toric code) to other models. Let’s recall what defines a theory of anyons: topological charges (particle labels), fusion rules and braiding rules. In 2D, string operators play an important role in these models. In fact, we can recover the data specifying the anyon theory from the action of the string operators. In this talk, we are only considering qubit Hamiltonians, on an infinite lattice, which are translationally invariant. Then TSC are ones where the centralizer of the stabilizer (in the group of Paulis with finite support) is again the stabilizer. TSC with the same anyon model are equivalent. Since we are going to show that everything is equivalent to some number of copies of the toric code, we can get a complete invariant by considering only the topological entanglement entropy. A sketch of the proof:

  1. Topological stabilizer groups admit local, translationally invariant independent generators (only possible in 2D)
  2. there are only a finite number of global constraints
  3. the number of global constraints equals the number of charges in the anyon theory
  4. the charges are topological, so they have string operators, which means we can extract an anyon model
  5. we can rule out chirality because the Hamiltonian is commuting (there can be no energy transport along the edge, as can be seen by considering evolution in the Heisenberg picture)
  6. build a framework for string segments as in multiple copies of the toric code
  7. other stabilizers (non-plaquette) have no charge
  8. map string segments to string segments and uncharged stabilizers to single qubit stabilizers
  9. ???

(Hector, if you’re reading this, fill in #9 in the comments. You went waaaay too fast!) Now let’s move on to subsystem codes. There are some complications, since there is now a gauge group as well as a stabilizer group. Gauge charges map naturally to stabilizer charges. Although the two groups are not generally isomorphic, they have the same order; in fact, these two charge groups are naturally dual. When all is said and done, every possible anyon model on qubits is a combination of: the toric code (2 bosons, 1 fermion, semionic statistics); the TSCC (3 fermions, semionic interactions, chiral); the subsystem toric code (1 boson); and the honeycomb subsystem code (1 fermion).