QIP 2012 Day 2

Quantum Information Enters the Third Dimension

Itai Arad, Zeph Landau and Umesh Vazirani:
An improved area law for 1D frustration-free systems.

Some motivation: which quantum many-body systems can be simulated efficiently on a classical computer? Which states even have an efficient classical description (even if we might have trouble finding that description)? Can we efficiently describe their entanglement? These questions are important both theoretically and practically.

In this talk, we’ll address these questions, focusing specifically on the case of ground states of gapped local Hamiltonians.
We care about this special case for reasons related to both physics and computer science. Local Hamiltonians are ubiquitous in condensed matter physics, and are the natural quantum version of constraint satisfaction problems in complexity theory.
For a state to be “efficient” in a meaningful sense, we should not just have a poly-sized classical description of the state. (For example, the local Hamiltonian itself implicitly gives such a description!) We should be able to efficiently calculate the expectation values of local observables.
For an arbitrary pure state, the Schmidt rank ($latex \mathrm{SR}$) or the von Neumann entropy ($latex S$) can be as large as the total number of spins in one half of a bipartition. In condensed matter systems we expect that these measures of entanglement don’t have a volume scaling, but rather, a scaling proportional to the number of spins on the boundary of the region defining the bipartition. This is what is meant by an area law.
Why might you expect that an area law holds? One motivation is the exponential clustering theorem (Hastings & Koma `06, Nachtergaele & Sims `06) which says that gapped systems have connected correlation functions which decay exponentially. Namely, if $latex |\Omega\rangle$ is a ground state of a gapped system, then for local observables with disjoint support we have

$latex | \langle AB\rangle - \langle A\rangle \langle B \rangle | \le c(A,B)\, \exp\bigl(-\mathrm{dist}(A,B)\, O(\Delta E) \bigr)$

Here the constants depend on the strength of interactions and the norms of the local observables.
But correlation is not the same as entanglement! For example, there are data hiding states (Hayden et al. `04, Hastings `07). Bounding the entanglement length is much harder than simply bounding correlation because, in a sense, entanglement can “sum up” a bunch of weak correlations and possibly add up to something large.
The area law has only been proven rigorously for 1d systems (Hastings `07) and it is open in other settings. For a 1D system, the area law means that the entropy of any contiguous block is bounded from above by a constant. Some implications: the ground state has an efficient description in terms of an MPS and it explains why DMRG works so well in practice.
What about dimensions higher than one? The constants in Hastings’ proof are exponential in $latex X = \log(d)/\Delta E$. (Here d is the local site dimension.) The best lower bounds (Hastings & Gottesman `09, Irani `09) are only polynomial: $latex S \ge \Delta E^{-1/4}$. Improving the constants in the 1d proof might give nontrivial bounds in 2d by coarse graining one of the dimensions, so progress on these constants could have highly nontrivial implications.
The new result is an upper bound with only polynomial dependence on $latex X$: up to polylog factors, $latex S \le X^3$.
The proof techniques use Chebyshev polynomials and they might be useful elsewhere. The authors assume the Hamiltonian is frustration free, but this probably can be extended to the more general case of frustrated gapped local Hamiltonians.
Outline of the proof
The general strategy: start with a product state and approach the ground state without generating too much entanglement. For example, let’s assume that the projection to the ground state doesn’t create too much entanglement, as quantified by the Schmidt rank: $latex \mathrm{SR}(\Pi |\phi\rangle) \le D\, \mathrm{SR}(|\phi\rangle)$ for some $latex D$. Then we could get a bound of $latex S \le \log D$ in terms of the von Neumann entropy.
We will have to use an approximate ground state projector (AGSP). It needs to have three properties (here $latex |Omegarangle$ is the unique ground state):

  1. $latex K | \Omega \rangle = |\Omega\rangle$
  2. $latex \| K |\Omega^\perp\rangle\|^2 \le \Delta\, \||\Omega^\perp\rangle\|^2$
  3. $latex \mathrm{SR}(K|\phi\rangle) \le D\, \mathrm{SR}(|\phi\rangle)$

Three factors determine the bound on $latex S$: $latex D$, which measures how much entanglement we create; the shrinking factor $latex \Delta$, which determines how fast we approach the ground state with each successive application of $latex K$; and $latex \mu$, the overlap between the initial product state and the ground state.
Lemma: $latex S(\Omega) \le O(1)\, \dfrac{\log \mu}{\log \Delta}\, \log D.$
When $latex D\Delta < \frac12$, there is a product state with big overlap with the ground state ($latex \mu$ isn’t too small). This is the bootstrap lemma. As a corollary, any such AGSP gives an area law.
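Here is a rough sketch, in my own back-of-the-envelope reading rather than the paper’s actual argument, of where the scaling in the lemma comes from:

```latex
% Decompose the product state against the ground state:
%   |\phi\rangle = \mu |\Omega\rangle + \sqrt{1-\mu^2}\,|\Omega^\perp\rangle .
% Each application of the AGSP K shrinks the orthogonal part by
% \sqrt{\Delta} (property 2) while multiplying the Schmidt rank by at
% most D (property 3). After \ell applications:
\[
  K^{\ell}|\phi\rangle = \mu\,|\Omega\rangle
    + O\!\left(\Delta^{\ell/2}\right)|\mathrm{junk}\rangle,
  \qquad
  \mathrm{SR}\!\left(K^{\ell}|\phi\rangle\right) \le D^{\ell} .
\]
% Taking \ell \approx 2\log\mu / \log\Delta makes the junk term small
% compared to \mu, so the normalized result is close to |\Omega\rangle
% and has Schmidt rank at most D^{\ell}, giving
\[
  S(\Omega) \lesssim \log D^{\ell}
    = O(1)\,\frac{\log \mu}{\log \Delta}\,\log D .
\]
```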
Now, to prove the area law, we need to actually find a good AGSP. Since the Hamiltonian is frustration free, we can take its terms to be local projectors w.l.o.g. Then we can use the detectability lemma to get the AGSP.
The idea of the detectability lemma is to split the projectors into two layers: the products of projectors onto the even and odd terms in the Hamiltonian, respectively. Let $latex A = \Pi_e\Pi_o$. Then we have

$latex \|A |\Omega^\perp\rangle\|^2 \le (1+\Delta E/2)^{-2/3}\, \||\Omega^\perp\rangle\|^2.$

Therefore, $latex A$ is an AGSP: the Schmidt rank increases by a factor of at most $latex D = d^2$ and the shrinking factor satisfies $latex \Delta = (1+\Delta E/2)^{-2/3}$. But we need a better AGSP to get an improvement to the area law.
Main idea: “dilute” A a bit so that we don’t let the entanglement accumulate too much. Can we dilute A so that the Schmidt rank decreases, but without worsening the shrinking factor by too much?
We define a new operator $latex \mathbb{N} = \sum_i Q_i$. It counts the number of constraint violations in a particular layer. It is a polynomial of degree m (the number of terms in the layer). Can we approximate it with a lower degree polynomial? Yes, and this is where the Chebyshev polynomials enter. Taking the Chebyshev polynomial of degree $latex \sqrt{m}$ gives a good approximation, but with a much lower degree. But now the number of terms is large.
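To get a feel for why degree $latex \sqrt{m}$ suffices, here is a small numerical illustration of my own (not the paper’s construction, and the parameters m and k are just picked for the demo): a rescaled Chebyshev polynomial of degree roughly $latex \sqrt{m}$ that equals 1 at 0 and is uniformly small on the rest of the spectrum $latex \{1,\dots,m\}$.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

m = 400                       # top of the spectrum of the "violation counter"
k = int(np.ceil(np.sqrt(m)))  # Chebyshev degree ~ sqrt(m)

# Affine map sending the interval [1, m] onto [-1, 1]
def g(x):
    return (2.0 * x - (m + 1)) / (m - 1)

Tk = Chebyshev.basis(k)

def p(x):
    # Normalized so p(0) = 1; since |T_k| <= 1 on [-1, 1] and T_k grows
    # rapidly outside it, p is uniformly small on [1, m].
    return Tk(g(x)) / Tk(g(0.0))

xs = np.arange(1, m + 1)
print(p(0.0))                  # exactly 1 by construction
print(np.max(np.abs(p(xs))))   # uniformly small on {1, ..., m}
```

The same suppression with an ordinary degree-m polynomial would be no surprise; the point is that degree $latex \sqrt{m}$ already works, which is what keeps the Schmidt-rank blow-up under control.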
We have to bound the number of terms in the sum. The bound is $latex t = \exp\bigl(O(\log^2(q \sqrt{m}))\, \ell\bigr)$. The overall Schmidt rank is at most $latex D^\ell$.
Now we have to glue everything together. We still have to coarse-grain the system initially to deal with the case when $latex \Delta D > 1$. (Recall, we need this product to be less than one half in order for the first lemma to work.)
Open questions: efficient representation for ground states of gapped local Hamiltonians in higher dimensions? Area law for higher dimensions? Can the proof be generalized to the frustrated case? Can this idea of “dilution” be used elsewhere?

Spyridon Michalakis and Justyna Pytel:
Stability of Frustration-Free Hamiltonians

I already blogged about this work (twice!) so I’ll just refer to that.

Toby Cubitt, Martin Schwarz, Frank Verstraete, Or Sattath and Itai Arad:
Three Proofs of a Constructive Commuting Quantum Lovasz Local Lemma

The Lovász Local Lemma (LLL) says that if several events are “mostly independent”, then the probability that none of these events occurs is still strictly positive. The constructive version of the LLL, whose most recent instantiation is due to Moser and Moser & Tardos, shows that the simplest algorithm you can imagine converges to a solution: just start with a random assignment and then check random constraints which are violated, fix them with a random assignment, and repeat until you reach a satisfying assignment. The quantum version of the LLL replaces classical “events” with projectors, but the current version is nonconstructive. Can we find a constructive proof in the quantum case?
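For reference, the classical Moser–Tardos resampling loop the talk builds on can be sketched in a few lines. This is a toy instance of my own (the clause encoding and variable names are mine, not from the talk); the quantum version discussed next replaces the resampling step with projective measurement and maximally mixed states.

```python
import random

# Each clause is a list of (variable, desired_value) pairs; the clause is
# violated when every literal disagrees with the current assignment.
clauses = [[(0, True), (1, False)],
           [(1, True), (2, True)],
           [(0, False), (2, False)]]
n_vars = 3

def violated(clause, assignment):
    return all(assignment[v] != val for v, val in clause)

random.seed(0)
assignment = [random.random() < 0.5 for _ in range(n_vars)]
for step in range(1000):  # Moser-Tardos converges in expected linear time
    bad = [c for c in clauses if violated(c, assignment)]
    if not bad:
        break
    # Resample every variable of one violated clause uniformly at random
    for v, _ in random.choice(bad):
        assignment[v] = random.random() < 0.5

print(assignment)
```

The striking part of the Moser–Tardos analysis is that this blind resampling terminates quickly whenever the LLL conditions hold; it is exactly this analysis that breaks in the noncommutative quantum setting.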
There are two challenges: entanglement and the measurement-disturbance tradeoff. This talk will focus on the challenge of entanglement, since this challenge is present even in the presence of commuting Hamiltonians.
In the simplest generalization of Moser & Tardos (probabilistic version) to the quantum case, we just replace the resampling step: the local qubits of a violated constraint are replaced with the maximally mixed state. This is the Verstraete et al. `09 dissipative map, so we know it converges to the correct state, but we can’t (yet) bound the convergence rate.
Key lemma: We can bound the probability of generating a particular sampling sequence, i.e. a particular directed acyclic graph (DAG). Next, we can bound the expected number of violations, then use this as a subroutine in an efficient algorithm. This allows us to bound the runtime.
Second approach: use the original entropy compression argument of Moser. This goes through without too much effort. The algorithm halts in expected linear time.
What about the noncommutative case? Now there are two complementary problems: in the first proof, we can no longer bound the runtime. In the info-theoretic second proof, there is no guarantee that it halts on the correct state (though it definitely will halt). There are two conjectures which, if true, would enable a noncommutative constructive QLLL. See the paper for details.
Toby, you went way too fast for me to keep up, sorry.

Norbert Schuch:
Complexity of commuting Hamiltonians on a square lattice of qubits

I also blogged about this one previously.

Josh Cadney, Noah Linden and Andreas Winter (contributed talk):
Infinitely many constrained inequalities for the von Neumann entropy

The speaker addresses a central question which affects all of us: Why is this talk better than lunch? Answer: it’s different; it’s introductory.
The von Neumann entropy satisfies many inequalities, for example, positivity and the strong subadditivity inequality. In this talk, we are interested only in inequalities based on reduced states that are dimension independent.
Define the entropy vector: for any subset of systems $latex \alpha$, we define the reduced state on $latex \alpha$ and the associated entropy. Stack all of these entropies into one big vector. If we fix the number of parties to be n, then we let $latex \Sigma_n$ be the set of all entropy vectors of n-partite states. We don’t make restrictions on the local Hilbert space dimension. We want to characterize this entropy vector (as a function of n). Since the von Neumann entropy tells us about bipartite entanglement, this might tell us something about multipartite entanglement.
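As a concrete illustration of the entropy-vector idea, here is a quick numerical sketch of my own (not from the talk): compute the vector for a random three-party pure state and check strong subadditivity on it.

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy in bits, dropping numerically zero eigenvalues
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def reduced(psi, keep, dims):
    # Reduced density matrix of a pure state on the subsystems in `keep`
    n = len(dims)
    t = psi.reshape(dims)
    traced = [i for i in range(n) if i not in keep]
    rho = np.tensordot(t, t.conj(), axes=(traced, traced))
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
dims = (2, 2, 2)

# The entropy vector: one entry per nonempty subset of {A, B, C}
S = {a: entropy(reduced(psi, sorted(a), dims))
     for a in [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]}

# Strong subadditivity: S(AB) + S(BC) >= S(ABC) + S(B)
ssa = S[(0, 1)] + S[(1, 2)] - S[(0, 1, 2)] - S[(1,)]
print(S, ssa)
```

Characterizing exactly which such vectors are achievable, without any bound on local dimension, is the question of the talk.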
If we look at all of $latex \Sigma_n$, then we get a convex cone once we take a closure. Call this the entropy cone. What sort of inequalities do we know which characterize this entropy cone?

  • For one party, we know positivity.
  • For two parties, we know subadditivity and triangle inequality.
  • For three parties, we know strong subadditivity and weak monotonicity.

Do these existing inequalities tell us everything we need to know about the entropy cone? To check this, we look at the extremal rays and try to find an entropy vector there. For n=2, we find that there are no more inequalities: the above list is exhaustive. For three parties, we find four families of rays, and again there are no more inequalities.
For four parties, we can continue this program. The extremal rays are stabilizer states! Are there any more four party inequalities? We don’t yet know. To help address the question, we first look at the classical case (Shannon entropy). In the classical case we additionally have (non-weak) monotonicity. Playing the same game in the classical system, we learn from previous work that the four-party cone isn’t polyhedral: there are an infinite number of entropy inequalities.

New result: given a constraint on the mutual information $latex I(A:C|B) = I(B:C|A) = 0$, then there are new inequalities, and additional inequalities each time we add a new party. See the paper for details. The proof elaborates on a classical proof by Makarychev et al. from `02. Each of the inequalities is independent: every new inequality sharpens the bound on the entropy cone in a different way than the others. They have not been able to violate a single one of the new classical inequalities numerically.

Jeongwan Haah:
Local stabilizer codes in three dimensions without string logical operators

The general problem is to find a noise-free subspace/subsystem of a physical system on which manipulations are still possible. For a many-body system, local indistinguishability (aka topological order) is stable at zero temperature, but might be fragile at nonzero temperature. Here we are focusing on the case of memory only (so, only in the ground state).

First a review of the toric code. Because the logical operators are string-like, the toric code is not stable at nonzero temperature. This talk will try to build a code which doesn’t have this drawback.

The code is defined on a cubic lattice with 2 qubits on each vertex. There are no string-like logical operators in this model. This system has an exotic ground state degeneracy which depends on the system size.

Some general considerations:

  • Simple cubic lattice with m qubits per site
  • t types of cubic interactions with code space dimension > 1
  • No single-site logical operators
  • No obvious string operator

To simplify things:

  • t=2: one for X and one for Z type Paulis
  • Product of all terms in the Hamiltonian should be Id
  • Logical operator on a site should be Id
  • Logical operator on a straight line should be Id

These conditions can be translated to linear constraint equations on the symplectic vector space of the Abelianized Pauli group. There are 64 commutation relations on a single unit cell of a cubic lattice which determine everything and there are 27 independent linear constraints (from the above considerations). There are 37 free variables, and 30 remaining equations. If you brute force solve everything, then you find there are 17 solutions.
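The counting step above is ordinary linear algebra over GF(2). A minimal sketch of the bookkeeping (with a toy system of my own, not Haah’s actual 64-variable one):

```python
import numpy as np

def gf2_rank(M):
    # Row-reduce a binary matrix over GF(2) and return its rank
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate the column mod 2
        rank += 1
    return rank

# Toy system: 3 constraints on 5 F2 variables (the third row is the sum
# of the first two, so only 2 are independent)
A = np.array([[1, 1, 0, 0, 1],
              [0, 1, 1, 0, 0],
              [1, 0, 1, 0, 1]], dtype=np.uint8)
r = gf2_rank(A)
print(2 ** (A.shape[1] - r))  # number of solutions of Ax = 0 over GF(2)
```

In the actual construction the free variables are entries of the symplectic vectors defining the stabilizer generators, and the brute-force solve over them yields the 17 codes.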
The first code has a Z↔X symmetry and looks like this:
Some questions: Is this model topologically ordered? What do you mean by a “string”?
To answer the first question, use the local topological order condition from Bravyi, Hastings & Michalakis: the ground states should not be distinguishable by observables with local support. One can check that this condition is satisfied. The computation simplifies if you take advantage of the symmetries.
Let’s get a rigorous definition of “string”. This is a key contribution of this work. A string is a finite Pauli operator which creates excitations at at most two locations. Define an “anchor” to be an envelope around the excitations. The “width” is the size of the anchors, the “length” is the distance between the anchors. There are no geometric restrictions. We need to tweak this definition a bit: we need a notion of a “trivial” string segment to rule out some annoying counterexamples.
The no-strings rule: String segments of width w and length L > c w are always trivial (where c is some constant). The above code satisfies the no-strings rule with c=15.
One can prove this using the same techniques developed above to check the local TQO condition. Jeongwan calls this the eraser technique.
To understand what the no-strings rule means, you have to develop some algebraic tools. Physically, it means you can’t drag defects; you can only annihilate them and then recreate them elsewhere. Jeongwan spent some effort describing these tools, but it was way too much for me to write all of it down (well, at least if I wanted to understand any of it). You’ll have to look at his papers for details.
Jeongwan specifically showed how both the local TQO and no-strings rule could be cast in the language of algebras of polynomials to give simple proofs for the case of his codes.

Andrew Landahl, Jonas Anderson and Patrick Rice:
Fault-tolerant quantum computing with color codes

Haah’s nice code makes me want to think more about 3d
But my 3d glasses here (puts on 3d color glasses) can make 2d color codes look 3 dimensional (see top of post.)
Consider color codes. One of the codes (the square-octagon lattice code, aka the 4,8,8 lattice) has a transversal S gate.
The control model: there is a faulty gate basis consisting of X, Z, H, S, $latex S^\dagger$, CNOT, measurements in X and Z, and state preparations of 0, + and $latex \pi/4$. Additional assumptions: parallel ops, refreshable ancillas, fast classical computation, equal-time gates.
Locality assumptions: 2D layout, local quantum processing.
We also consider two error channels: bit flips and double Pauli (DP).
Consider three models of error:
Circuit based noise model:

  • each preparation and one-qubit gate followed by BP(p)
  • Each CNOT gate followed by DP(p)
  • Each measurement preceded by BP(p) and result flipped with prob p

Phenomenological noise model:

  • Same, except each syndrome bit extraction circuit modeled as a measurement that fails with prob p. This ignores noise propagation between data and ancilla.

One other noise model is the code capacity noise model. See paper for details. 🙂
Decoders and thresholds: Can use an MLE decoder or an optimal decoder (which might be slow). The threshold seems to be around 10.9% for the optimal decoder, and not much worse for the MLE decoder. Some numerical results (depending on the noise model) are 10.56%, 3.05%, and $latex 8\times10^{-4}$%.

(An aside.) Random-bond Ising model: Consider an Ising model with random signs on the two-body couplings. Every CSS code has an associated random-bond Ising model. The Nishimori conjecture (which is false!) said that you can’t increase the order by increasing the temperature.
Details on syndrome extraction: One can use sequential schedules or interleaved schedules. We need to make sure that the code checks propagate to themselves, and syndrome checks propagate to themselves multiplied by a stabilizer element. They found an interleaving schedule that works.
Decoding: Use the code-capacity MLE decoder (works for all CSS codes). They formulate it as an integer program over GF(2), which can be rewritten as an integer program over the reals. (This isn’t efficient; integer programming is NP-complete in general.)
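Schematically, the code-capacity MLE problem is: given syndrome $latex s$, find the minimum-weight error $latex e$ with $latex He = s$ over GF(2). Here is a brute-force stand-in of my own for the integer-program solver, on a hypothetical toy parity-check matrix (not the 4,8,8 color code); it only scales to toy sizes.

```python
import itertools
import numpy as np

# Toy parity-check matrix and observed syndrome (assumed, for illustration)
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=np.uint8)
s = np.array([1, 0, 0], dtype=np.uint8)

best = None
for bits in itertools.product([0, 1], repeat=H.shape[1]):
    e = np.array(bits, dtype=np.uint8)
    # Keep the lowest-weight error consistent with the syndrome
    if np.array_equal(H @ e % 2, s) and (best is None or e.sum() < best.sum()):
        best = e
print(best)
```

The integer-programming formulation replaces this exhaustive search with an optimization solver, which works well in practice despite the worst-case NP-completeness.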
Showed some pictures of the thresholds.
Turns out the interleaved schedule doesn’t have much impact… it’s about the same threshold (within error) as the sequential schedule.
Also showed some rigorous lower bounds which come from a self-avoiding walk.
Architectures and computation: can imagine a pancake architecture which does gate transversally between layers. How does this change the threshold?
Good point: Injection of magic states into codes hasn’t really been studied in the case where the injection is error-prone. This would be a good topic for future work.

Guillaume Duclos-Cianci, Héctor Bombin and David Poulin:
Equivalence of Topological Codes and Fast Decoding Algorithms

Note that I blogged about this previously.
Two gapped systems are in the same phase if they are connected by a local unitary transformation, meaning there is a constant-depth quantum circuit (CDQC) which transforms one into the other [Wen et al. 2010, Schuch et al. 2010]. What makes two codes equivalent? Under what conditions does a CDQC exist?
Consider two copies of the Kitaev toric code (KTC). Does this create a new code? No… However, you could imagine applying a CDQC to the pair of codes which masks this product code. Thinking in reverse, you could ask which codes have such a character, namely, that they are loosely coupled versions of multiple copies of the KTC.
The main result is (roughly) that every 2D stabilizer code on qubits is equivalent (modulo CDQC) to several copies of the KTC, and moreover the mapping can be constructed explicitly.
Some useful concepts. Topological charges are equivalence classes of excitations which can be fused within a fixed bounded region back into the vacuum configuration. Excitation configurations are collections of defects inside a region. String operators move defects around.
The result applies in the following setting. They consider 2D lattices of qubits with geometrically local, translationally invariant stabilizer generators. To have a topological-like condition, they ask that the distance of the code scales with the linear size of the lattice ($latex d sim L$).
Gave a sketch of the proof… too detailed to blog in real time. See the paper for details.
Application to quantum error correction (QEC): The decoding problem is the same as trying to infer the worldline homology class from the measured particle configuration. You can decode, say, the 4,8,8 color code by deducing a mapping of the charges, then deducing an effective noise model on the KTCs, and then applying your favorite decoding algorithm for each KTC.
They did some numerics and found 8.7% fault-tolerance threshold for the 4,8,8 code using the RG decoder.

Joseph M. Renes, Frederic Dupuis and Renato Renner (contributed talk):
Quantum Polar Coding

Joe started off by disappointing the audience with an announcement that his all-ETHZ team had written a paper without any smooth entropies.
Instead, he talked about quantum polar codes, which are explicit, channel-adapted block codes (CSS codes, in fact) with linear-time encoding and decoding, and which achieve the symmetric (see below) coherent information for Pauli channels. They appear to need entanglement assistance, but maybe not? Details to follow.
The key idea is channel polarization. If we have two independent bits $latex X_1, X_2$ going into two bit channels with outputs $latex Y_1, Y_2$, then we can do a change of basis on the input to $latex U_1 = X_1 \oplus X_2, U_2 = X_2$. We think of this as a channel from $latex U_1$ to $latex Y_1Y_2$, together with a channel from $latex U_2$ to $latex Y_1Y_2$ that uses our imperfect knowledge of $latex U_1$ as side information. The first channel has less mutual information than our original channel, and the second channel has more. Hence we have “polarized” the channels, essentially taking the mutual information from one use and giving it to a different one.
The basic relation is:
$latex I(U_1 ; Y_1 Y_2) + I(U_2 ; U_1Y_1Y_2) = 2 I(W)$
where I(W) refers to the mutual information of the original channel.
If we keep doing this recursively, we can continue until we get a bunch of nearly-perfect (good) channels and a bunch of nearly-worthless (bad) channels. Remarkably, the fraction of good channels is close to the input capacity given uniform inputs (which we call the “symmetric capacity”). To encode, we send messages over the good channels, and freeze inputs to bad channels (both parties know which ones are which). This requires only $latex n\log n$ CNOT gates. You can decode sequentially using ML; the recursive structure of the code makes ML efficient.
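One can check a single polarization step numerically for a classical binary symmetric channel. This is my own sanity-check script (the crossover probability and variable names are assumptions for the demo; the talk is about the quantum generalization):

```python
import itertools
import numpy as np

def h(ps):
    # Shannon entropy (bits) of a probability list, skipping zeros
    ps = np.array([q for q in ps if q > 0])
    return float(-np.sum(ps * np.log2(ps)))

p = 0.11  # BSC crossover probability (arbitrary choice)

# Joint distribution over (u1, u2, y1, y2) for one polarization step:
# x1 = u1 XOR u2 goes through the first BSC, x2 = u2 through the second.
P = {}
for u1, u2, y1, y2 in itertools.product([0, 1], repeat=4):
    x1, x2 = u1 ^ u2, u2
    pr = 0.25 * (p if y1 != x1 else 1 - p) * (p if y2 != x2 else 1 - p)
    P[(u1, u2, y1, y2)] = pr

def marg(keys):
    out = {}
    for k, v in P.items():
        kk = tuple(k[i] for i in keys)
        out[kk] = out.get(kk, 0.0) + v
    return out

def I(a_keys, b_keys):
    # Mutual information I(A;B) from the joint distribution P
    return (h(marg(a_keys).values()) + h(marg(b_keys).values())
            - h(marg(a_keys + b_keys).values()))

I_bad = I((0,), (2, 3))       # I(U1; Y1 Y2): the degraded channel
I_good = I((1,), (0, 2, 3))   # I(U2; U1 Y1 Y2): the upgraded channel
cap = 1 - h([p, 1 - p])       # symmetric capacity of one BSC use
print(I_bad, I_good, I_bad + I_good, 2 * cap)
```

The two mutual informations straddle the single-use symmetric capacity while their sum is conserved, which is exactly the basic relation above.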
The quantum version just uses the fact that the transformation is good in the conjugate basis as well! You just turn the circuit “upside-down” (CNOTs reverse the target and control). For a Pauli channel, the polarization occurs in both amplitude AND phase, albeit in different directions. We only need to analyze amplitude and phase in order to verify that a channel can send general quantum states (see Christandl & Winter 2005 for the precise statement).
Some channels are going to be good for both amplitude and phase, so we use those to send entanglement. For the ones which are bad on amplitude, we freeze the amplitude, and similarly, for the ones which are bad on phase, we freeze the phase. We can use preshared entanglement for the channels which are bad on both amplitude and phase.
To decode, just successively and coherently decode both the channels. Reuse the classical decoders. Decode first amplitude, then phase conditional on the given amplitude. You can achieve the (symmetric) coherent information as the net rate.
From the numerical simulations, (see picture) it seems that this entanglement assistance is never actually needed!
In fact, for channels with low enough error rates they can show this rigorously. The proof techniques are similar to the original polar code proof techniques and use channel martingales.
One open question is to resolve whether or not the entanglement assistance is really necessary. Also, can they extend this to other channels? Yes! It works (joint work in progress with M. Wilde.) Plus, you can still do sequential decoding.

Sergey Bravyi and Robert Koenig:
Disorder-assisted error correction in Majorana chains

Consider, if you haven’t already, the toric code Hamiltonian $latex H_0$ given by minus the sum of toric code stabilizer generators. Now perturb it lightly by adding local Z fields: $latex V = -m\sum_i Z_i$. What does the storage fidelity look like (at zero temperature) as a function of time? That is, what happens if we encode an adversarially chosen pair of qubits, wait a time t, and then try to recover the qubits? Let $latex T_{\mathrm{storage}}$ denote the longest time we can wait and still get 99% fidelity.
The bad news: $latex T_{\mathrm{storage}} = O(\log N)$.
Crazy idea: Add randomness!
The idea is that the perturbation creates particles, which then hop around, creating strings that quickly lead to logical errors. But adding a little disorder can slow them down via the phenomenon of Anderson localization! In fact, the eigenfunctions can be exponentially localized. This is a nice example of a phenomenon in which something that sounds like a negative result—disorder making quantum particles move even more slowly than diffusion—turns into a positive result, when we can use it to limit our old enemy, Eve.
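The single-particle phenomenon is easy to see numerically. Here is a standard 1D Anderson-model demo of my own (the parameters are assumptions chosen for the illustration): disorder makes the most-localized eigenstate dramatically more concentrated, as measured by the inverse participation ratio.

```python
import numpy as np

# 1-D Anderson model: nearest-neighbour hopping plus random on-site
# disorder of strength W.
rng = np.random.default_rng(7)
N, t, W = 200, 1.0, 3.0

def max_ipr(H):
    # Inverse participation ratio sum_x |psi(x)|^4 for each eigenstate;
    # ~1 for a localized state, ~1/N for an extended one. Return the max.
    _, vecs = np.linalg.eigh(H)
    return (np.abs(vecs) ** 4).sum(axis=0).max()

hop = -t * (np.eye(N, k=1) + np.eye(N, k=-1))
clean = hop
disordered = hop + np.diag(rng.uniform(-W / 2, W / 2, size=N))
print(max_ipr(clean), max_ipr(disordered))  # disorder localizes states
```

The hard part of the paper is precisely that this picture is a one-particle statement, while the perturbed code Hamiltonian is a many-body problem.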
Challenges: need it to work for many particles, and in 2-D. This is intractable to simulate classically.
So… instead we consider a 1D model that we can analyze, viz. unpaired Majorana modes in quantum wires (due, of course, to Kitaev).
The Hamiltonian is a sum over three types of terms: a chemical potential ($latex a_j^\dagger a_j$), a hopping term ($latex a_j^\dagger a_{j+1} + \mathrm{h.c.}$), and a Cooper pair term ($latex \Delta^* a_i^\dagger a_j^\dagger + \Delta\, a_i a_j$). The base Hamiltonian has a 2-fold degenerate ground state (thus protecting a qubit). With respect to Majorana operators $latex c_j$, the Hamiltonian becomes super simple:
$latex \frac{J}{2} \sum_{j=1}^{N-1} i\, c_{2j} c_{2j+1}$
The perturbation is $latex V = \sum_{j=1}^{N-1} i\mu\, c_{2j-1} c_{2j}$.
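In this Majorana language the protected qubit is easy to exhibit numerically. A small sketch of my own (the indexing conventions and normalization are mine): diagonalize the unperturbed chain and find the two zero modes, the unpaired Majoranas $latex c_1$ and $latex c_{2N}$.

```python
import numpy as np

N, J = 8, 1.0
# Quadratic Majorana Hamiltonian H = (i/4) sum_{p,q} A_{pq} c_p c_q with A
# real antisymmetric; the unperturbed chain couples c_{2j} to c_{2j+1}.
A = np.zeros((2 * N, 2 * N))
for j in range(1, N):
    p, q = 2 * j - 1, 2 * j  # 0-based indices of c_{2j} and c_{2j+1}
    A[p, q], A[q, p] = 2 * J, -2 * J

# Single-particle energies are |eigenvalues of iA| (they come in +/- pairs);
# c_1 and c_{2N} are left uncoupled, giving two exact zero modes that
# encode the protected qubit.
energies = np.sort(np.abs(np.linalg.eigvalsh(1j * A)))
print(energies[:3])
```

The perturbation couples $latex c_{2j-1}$ to $latex c_{2j}$ and so competes with this pairing; whether the zero modes survive is exactly the storage question.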
The resulting error-correction procedure resembles the minimum-weight matching procedure for the toric code, but is much simpler because it works on a line.
This code resembles the lowly repetition code, but actually has nontrivial distance.
Now, what happens if the perturbation has randomly varying weights? That is, $latex V = \sum_{j=1}^{N-1} i\mu_j\, c_{2j-1} c_{2j}$ for random $latex \mu_j$.
The expected storage time results can be summarized in the following table.

  • Weak or no perturbations, small N: $latex \exp(\Omega(N))$ without disorder, $latex \exp(\Omega(N))$ with disorder. Not so nice…
  • Strong perturbations, small N: $latex O(\log N)$ without disorder, $latex \Omega(N)$ with disorder. Nice…
  • Large N: $latex O(\log N)$ without disorder, $latex \exp(\Omega(N))$ with disorder. Very nice!

The results are a mix of rigorous proofs and simulation.
Dynamical localization resembles that obtained for the 1-particle Anderson model:
$latex c_q(t) = \sum_p R_{p,q}(t)\, c_p$ with $latex |R_{p,q}(t)| \leq e^{-|p-q|/t}$.
That’s only for one particle, though. They actually need to prove a bound on the expected determinant of submatrices of $latex R$. Using this, they can show the fidelity remains high for an exponentially long time. This is interesting not only because of its implications for quantum memory, but because understanding multi-particle localization is a tough problem in general.
The proof uses quasi-adiabatic continuation for free fermions to argue that a superposition in the ground space is mostly preserved. Or, more precisely, that the only damage occurring to it consists of correctable errors. The techniques look important and interesting. Properties such as gap stability that they prove might be of interest elsewhere.
But they don’t solve everything: in some cases, they need numerical simulation. They use a number of tricks to tame the exponentials that would come from naive algorithms; in particular, taking advantage of the fact that they are working with fermionic Gaussian states (building on, and improving, an old result of Terhal and DiVincenzo).

QIP 2012 Day 1

Sergey Bravyi, Topological qubits: stability against thermal noise.
Joint work with Jeongwan Haah, based on arXiv:1105.4159.


Sergey opened his talk with an image of a bagel in a Solfatara landscape, meant to evoke topologically-protected qubits (get it? a bagel?) in a thermal (steam) bath.
The question is whether we can engineer an energy landscape with high barriers between encoded states so as to enable passive quantum error correction; i.e. to create a topologically protected qubit. Here are the main ingredients:

  • Stabilizer codes on a lattice with local generators and code distance $latex d \approx L$, where $latex L$ is the lattice size.
  • A code Hamiltonian with energy penalty for states outside logical subspace. The Hamiltonian is defined simply by having a violated stabilizer cost unit energy.
  • Thermal noise: should be described by a Markovian master equation with local jump operators. The spectral density should obey detailed balance; this will be the place where temperature enters.
  • Finally, there should be a decoder. Here we consider the RG (renormalization group) decoder.

Kitaev’s toric code (1997) was the first example of a topological QECC. If you’re not familiar with it, then you should read about that instead of the current results. The key idea is that the errors are strings that wrap around the torus, and so have length at least $latex L$. However, since a partial string incurs only a constant energy penalty, the memory time (= amount of time until the state degrades by a constant amount) is $latex e^{2\beta}$ [arXiv:0810.4584], independent of the lattice size! This is pretty disappointing; we can get this performance by not encoding the qubit at all.
Indeed, any 2D topological stabilizer code has constant energy barrier [0810.1983], which implies that relaxation to thermal equilibrium occurs at a rate independent of system size $latex L$.
This no-go result is daunting, but in higher dimensions we can still achieve some nontrivial topological protection. Here’s what we want: a topological stabilizer code in a D-dimensional array with commuting Pauli stabilizers $latex G_a$, each with support of size O(1), and with each qubit acted on by O(1) stabilizers. The ground space of the corresponding Hamiltonian is the set of states that are +1 eigenvectors of every stabilizer. We would like the distance to grow at least linearly with lattice size; such models have intrinsic robustness.
Fortunately, there is a shining example of a spatially local stabilizer code with all the self-correcting properties we want. Unfortunately, it requires D=4; it is the 4-d toric code. Here is its energy landscape.

Defects are loops around error membranes, and any sequence of single-qubit Pauli errors mapping a ground state to an orthogonal one must create roughly $latex L$ defects at some step. Thus the memory time is exponential in $latex L$ for low $latex T$ [0811.0033].
All 3D stabilizer Hamiltonians studied so far have constant energy barrier. Indeed this is true of all 3D codes that are invariant under translation and “scale invariance” [1103.1885]. (Here, scale invariance means that the ground state degeneracy doesn’t depend on the size of the lattice.) Haah’s breakthrough 1101.1962 is the first example of a 3-d stabilizer code without string-like logical operators. In his model, defects cannot move very far.
Our results give self-correcting properties of any topological stabilizer codes that obey the no-strings rule, e.g. Haah’s codes.

  1. Their energy barrier is $latex \Omega(\log L)$
  2. They have partial self correction. Specifically, the Arrhenius law together with the logarithmic energy barrier gives lifetime $latex L^{c\beta}$, for any inverse temperature $latex \beta>0$, and for $latex L \leq L^*$ for some critical value $latex L^* \sim e^{\beta/3}$. See figure:

This lifetime reaches a maximum value (as a function of L) when L is proportional to $latex e^{\beta/3}$. Thus, choosing L optimally results in a memory lifetime of $latex T_{\mathrm{mem}} \sim e^{c\beta^2/3}$.
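To spell out the arithmetic behind that optimum (this just combines the two scalings quoted above, nothing new):

$latex t_{\mathrm{mem}}(L) \sim L^{c\beta} = e^{c\beta \ln L}, \quad \text{valid for } L \leq L^* \sim e^{\beta/3},$

which is increasing in $latex L$, so the best choice is $latex L = L^*$, giving

$latex t_{\mathrm{mem}}(L^*) \sim \bigl(e^{\beta/3}\bigr)^{c\beta} = e^{c\beta^2/3}.$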
One example of a code with no string-like errors is the 3-d cubic code of Haah. It is obtained by placing two qubits at each site of a cubic lattice and tiling the following stabilizer generators. See figure (from the Bravyi-Haah paper).

The noise model: One can get a Markovian noise model by considering the Davies weak coupling limit, where the Lindblad operator $latex A_{k,\omega}$ transfers energy $latex \omega$ from the system to the bath. The spectral density $latex r(\omega)$ obeys detailed balance: $latex r(-\omega) = \exp(-\beta\omega)\, r(\omega)$. Note that this is the only place where temperature enters.
The decoder: After measuring the syndrome, the correction operator is found using ideas from Jim Harrington’s thesis [2004; I can’t find it online] and the Renormalization Group decoder from 1006.1362. See photo.
The strategy is to:

  1. Find connected clusters
  2. For each connected cluster C find the min enclosing box and try to annihilate C by an operator acting within the box.
  3. Stop if no defects left.
  4. If some defects are left, then increase the length by factor of 2 and repeat.

The RG decoder stops when all defects have been annihilated or when length reaches the lattice size, in which case we have failure. We can also have failure when all defects have been annihilated but the system has been returned to a wrong ground state.
The RG decoder can be implemented in time poly(L) (and can be parallelized to run in time log(L)).
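For concreteness, here is a toy sketch of the decoder’s control flow in Python: the clustering and scale-doubling schedule described above, with a stand-in parity test for whether a cluster can be annihilated (this is my own simplification, not Haah’s actual syndrome/correction rules).

```python
# Toy sketch of the RG decoder's control flow: defects are points on a
# 2D grid, and we pretend a cluster can be annihilated iff it contains
# an even number of defects (a stand-in for the real "neutral cluster"
# condition in the cubic code).

def clusters(defects, r):
    """Partition defects into clusters: points within L_infinity
    distance r of each other are connected (transitively)."""
    defects = list(defects)
    out = []
    while defects:
        cluster = [defects.pop()]
        grew = True
        while grew:
            grew = False
            for p in defects[:]:
                if any(max(abs(p[0] - q[0]), abs(p[1] - q[1])) <= r
                       for q in cluster):
                    cluster.append(p)
                    defects.remove(p)
                    grew = True
        out.append(cluster)
    return out

def rg_decode(defects, lattice_size):
    """Return True on success, False on failure (scale reached L)."""
    defects = set(defects)
    r = 1
    while defects and r < lattice_size:
        for c in clusters(defects, r):
            if len(c) % 2 == 0:      # toy "annihilable" test
                defects -= set(c)    # erase the cluster
        r *= 2                       # double the length scale
    return not defects

# Two nearby defect pairs: annihilated at a small scale.
print(rg_decode([(0, 0), (0, 1), (5, 5), (5, 6)], lattice_size=16))  # True
# A single unpaired defect can never be removed.
print(rg_decode([(0, 0)], lattice_size=16))  # False
```

The doubling schedule is what gives the poly(L) (and parallelizable) running time quoted above.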
We can define the memory time as follows. We time evolve according to the master equation given the local quantum jump Lindblad operators. After some time $latex t$, we run the decoder. Then we check the worst-case (over all initial states) of the trace distance between the initial and final state.
$latex \| \mathcal{D}(\rho(t)) - \rho(0) \|_1 \le O(t)\, N 2^k \exp(-\beta m) (1+\exp(-\beta))^N.$
Here N is the number of physical qubits, k is the number of logical qubits, t is the time, $latex beta$ is inverse temperature and m is the number of errors that can be corrected by the decoder (or more precisely, the height of the energy barrier.) Essentially, there is a temperature/energy factor that competes with an entropic factor.
Analyzing it, we find that the memory time grows as $latex L^{c\beta}$ as long as $latex L \le \exp(\beta/3)$.
This gives a large but finite lower bound on the memory time at a fixed temperature.
Is it optimal? Maybe the analysis is pessimistic and the Haah code actually has an exponentially large lifetime? Maybe growing L can result in unbounded memory time when $latex beta$ is sufficiently large? The authors carried out a Monte Carlo simulation of Haah code with only X errors to test these theories. It used 1000 days of CPU time on IBM’s Blue Gene.
The plots are qualitatively similar to the lower bounds on memory time. In particular, they show that the maximum memory time scales quadratically with $latex beta$ and that the memory time for fixed temperature increases with $latex L$ and then starts to decrease.
How do the proofs go?
Idea 1: The no-strings rule implies localization of errors. That is, any error E can be written as $latex E = E_{\mathrm{loc}} \cdot G$, where G is a stabilizer and $latex E_{\mathrm{loc}}$ is supported on at most 2 neighborhoods.
In order for the accumulated error to have large weight, at least one intermediate syndrome must be non-sparse, with at least one pair of defects within distance $latex a$ of each other.
Idea 2: Uses scale invariance of the no-strings rule.
Define sparseness and denseness at different scales. A syndrome which is dense at $latex p$ consecutive scales will include a cluster of $latex p$ defects.
Show that to implement a logical operation, at least one intermediate syndrome must be dense at roughly $latex \log(L)$ different scales.

Mark Wilde,
Advances in classical communication for network quantum information theory, based on 1109.0802 and 1102.2624

Consider the problem of an interference channel, with two senders and two receivers. This problem is hard enough in the “ccqq” case, meaning that the channel takes two classical inputs to a bipartite quantum output: i.e. $latex x_1,x_2 \rightarrow \rho_{x_1,x_2}^{B_1B_2}$. This is depicted here (figure taken from their paper).

Along the way to understanding this, we’ll need to consider a multiple-access channel, with two senders and one receiver. The achievable rate region is in a sense an optimistic one, bounded by the obvious constraints:

$latex R_1 \leq I(X_1;B|X_2)$
$latex R_2 \leq I(X_2;B|X_1)$
$latex R_1 + R_2 \leq I(X_1X_2;B)$

Why are these obvious? The first two constraints are what we would obtain if the receiver has successfully decoded one message, and both parties get to use that information to design a joint encoder-decoder pair. The last constraint is what we would get if the two senders could work together, apart from the requirement that their average input to the channel should be product across the two senders. Thus, these are natural upper bounds. The nonobvious part is that this region can be achieved.
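As a sanity check on these formulas, one can evaluate the three mutual informations for a toy classical multiple-access channel (the binary adder, a standard textbook example; the ccq case replaces the output distribution by quantum states, but the classical computation already shows the shape of the region):

```python
from itertools import product
from math import log2

# Toy classical multiple-access channel: the binary adder B = X1 + X2
# with uniform, independent inputs.

def H(dist):
    """Shannon entropy (bits) of a distribution given as a dict."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def joint():
    """Return p(x1, x2, b) for the binary adder channel."""
    return {(x1, x2, x1 + x2): 0.25
            for x1, x2 in product([0, 1], repeat=2)}

def marginal(d, keep):
    """Marginalize onto the coordinates listed in `keep`."""
    out = {}
    for k, p in d.items():
        kk = tuple(k[i] for i in keep)
        out[kk] = out.get(kk, 0) + p
    return out

p = joint()
I_x1x2_b = H(marginal(p, [2])) + H(marginal(p, [0, 1])) - H(p)
I_x1_b = H(marginal(p, [0])) + H(marginal(p, [2])) - H(marginal(p, [0, 2]))
# I(X2;B|X1) = I(X1 X2; B) - I(X1; B) by the chain rule
I_x2_b_given_x1 = I_x1x2_b - I_x1_b

print(I_x1_b, I_x2_b_given_x1, I_x1x2_b)  # 0.5 1.0 1.5
```

So the corner point here is $latex (0.5, 1.0)$ with sum rate 1.5, and the other corner follows by symmetry, exactly the time-sharing picture described next.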
The coding theorem was obtained by Andreas Winter, when he was a grad student (quant-ph/9807019). How does it work? First, we observe that the desired rate region can be obtained as a convex combination of the “corner points” of the rate region: $latex (I(X_1;B), I(X_2;B|X_1))$ and $latex (I(X_1;B|X_2), I(X_2;B))$.
To achieve one of these corner points, use the fact that the two senders are sending uncorrelated inputs, so we can average over the inputs of one sender (say $latex X_2$) to obtain an effective single-sender/single-receiver channel, for which coding is possible at rate $latex I(X_1;B)$. Of course, measurement is potentially destructive, but since we are operating in the regime of very low error, we can use the fact that measurements with nearly certain outcomes cause very little damage (the infamous “Tender Measurement Lemma”). (Ok, it’s mostly called the “gentle measurement lemma” but I like “tender.”) Thus, the decoder obtains $latex X_1$ and, conditioned on it, can decode $latex X_2$. (Clearly the sender knows $latex X_1$ as well. Note that this is no longer true for the multiple-sender case.)
Sequential decoding: To do the decoding, we need to use the typical projector for the average state, as well as the conditionally typical projectors for codewords. The HSW idea is to use the pretty good measurement. The idea of sequential decoding, in analogy with the classical idea, is to check each codeword sequentially. It works out pretty similarly, with the measurement operators in both cases being lightly distorted versions of the conditionally typical projectors.
The crucial lemma making this possible is from 1109.0802, by Pranab Sen. He calls it a “noncommutative union bound”. The statement is that

$latex 1 - \mathrm{tr}(\Pi_n \cdots \Pi_1 \rho\, \Pi_1 \cdots \Pi_n) \le 2 \sqrt{\sum_{i=1}^n \mathrm{tr}((1-\Pi_i) \rho)}$
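
Since the bound holds for any projectors and any state, it is easy to spot-check numerically; here is a quick sketch with random projectors and a random pure state (the sandwiched form of the left-hand side is the one from Sen’s statement).

```python
import numpy as np

# Numerical sanity check of the noncommutative union bound for random
# projectors and a random pure state.  The bound holds for ANY choice,
# so random instances should never violate it.

rng = np.random.default_rng(0)

def random_projector(d, k):
    """Projector onto k random orthonormal directions in C^d."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(m)
    v = q[:, :k]
    return v @ v.conj().T

d, n = 8, 5
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

projs = [random_projector(d, 6) for _ in range(n)]

seq = rho.copy()
for P in projs:                      # build Pi_n ... Pi_1 rho Pi_1 ... Pi_n
    seq = P @ seq @ P
lhs = 1 - np.trace(seq).real
rhs = 2 * np.sqrt(sum(np.trace((np.eye(d) - P) @ rho).real for P in projs))

print(lhs <= rhs + 1e-9)  # True
```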

Successive decoding: In general, we’d like to analyze these problems of multipartite information theory using the traditional single-sender/single-receiver tools, like HSW. Since each individual code works with exponentially small error, and the gentle measurement lemma states that decoding causes exponentially small damage, we should be able to compose several protocols without much trouble. The only subtlety is that the typical projectors don’t commute, which is where the noncommutative union bound comes in. We apply it to the following “intersection projectors.”
$latex \tilde{\Pi}_{x_1^n, x_2^n} = \Pi\, \Pi_{x_1^n}\, \Pi_{x_1^n, x_2^n}\, \Pi_{x_1^n}\, \Pi$
Most important open problem: find a general three-sender quantum simultaneous decoder. Solving this should hopefully yield the insights required to handle an unlimited number of senders.

Nilanjana Datta,
Quantum rate distortion, reverse Shannon theorems and source channel separation,
joint work with Min-Hsiu Hsieh, Mark Wilde, based on 1108.4940.

Compression and transmission: the source coding and noisy coding theorems of Shannon. The fundamental limit on compression is the entropy.
Lossless compression is often too stringent a condition. For example, consider jpg compression, which gives faithful images (to a human eye) despite throwing away lots of bits of information. We consider Rate Distortion Theory, i.e. the theory of lossy data compression. We are interested in the maximum rate of compression given a fixed maximum amount of distortion. Define the rate distortion function:
$latex R_c(D) =$ minimum classical asymptotic cost for sending many copies of state $latex \rho$ with per-copy distortion $latex \leq D$, where $latex c$ is for classical.
For a fixed value of the distortion function $latex 0 \le D < 1$, we work in the storage setting. Define a quantum rate distortion function in the asymptotic limit.
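For orientation, here is the classical benchmark these quantum quantities generalize: Shannon’s rate-distortion function for a Bernoulli(p) source under Hamming distortion, $latex R(D) = h(p) - h(D)$ (a standard textbook formula, included only for intuition):

```python
from math import log2

# Shannon's rate-distortion function for a Bernoulli(p) source with
# Hamming distortion: R(D) = h(p) - h(D) for 0 <= D <= min(p, 1-p),
# and 0 beyond.

def h(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0, 1) else -x * log2(x) - (1 - x) * log2(1 - x)

def R(p, D):
    if D >= min(p, 1 - p):
        return 0.0
    return h(p) - h(D)

print(R(0.5, 0.0))   # 1.0 -> zero distortion needs the full entropy
print(R(0.5, 0.11))  # ~0.5 -> half the rate at ~11% distortion
```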
Barnum (in quant-ph/9806065) proved the first lower bound on the quantum rate distortion function:
$latex R_q(D) \ge \min_{S_\rho(N,D)} I_c(\rho,N)$
where the term on the right is the coherent information and $latex S_\rho(N,D) = \{N\ \mathrm{CPTP} : d(\rho,N) \le D\}$. But the coherent information can be negative, so this can’t be a tight lower bound in all cases.
Now move to the communication setting. We can define the entanglement-assisted quantum rate distortion function. They find a single-letter formula for it, given by $latex \min_{S_\rho(N,D)} \frac12 I(R;B)_\omega$ (in terms of the quantum mutual information). This is both a lower bound and achievable by using quantum state splitting. Alternatively, the achievability follows from the quantum reverse Shannon theorem (QRST).
The unassisted quantum rate distortion function can also be found by using the QRST. One needs to use a regularized formula.

Jon Yard,
Quantum communication with Gaussian channels of zero quantum capacity,
joint work with Graeme Smith and John A. Smolin, based on 1102.4580.

The title on arxiv.org is “Gaussian bosonic synergy: quantum communication via realistic channels of zero quantum capacity”. Realistic? Synergy?? Think about this, kids, before you go off to work in industrial research.
This paper concerns the fascinating topic of channels with zero quantum capacity. Channels with zero classical capacity are simple to describe: they are the ones whose output is independent of the input. Here is one example.

But zero quantum capacity occurs even for channels whose output depends on the input. For example, consider the completely dephasing channel, aka a classical channel. Obviously this has no quantum capacity. The fact that this channel has zero quantum capacity is both because it is anti-degradable (meaning that Eve’s output can simulate Bob’s; this implies 0-capacity by no-cloning) and because it is PPT, meaning it maps every input state to a PPT output state, or equivalently, its Jamiolkowski state (obtained by feeding in half of a maximally entangled state) is PPT. Sadly, these two conditions are still pretty much our only examples of zero-capacity channels. See a previous talk of Graeme’s for a framework that could potentially expand this set of conditions.
Let’s talk a bit more about those two examples. The anti-degradable channels are intuitively those that give more to Eve than to Bob. So erasure channels with erasure rate >50% count, as do attenuation channels (for photons; these can be modeled by a beamsplitter) and certain depolarizing channels (using an argument due to Brandao, Oppenheim, and Strelchuk http://arxiv.org/abs/1107.4385). On the other hand, PPT channels are at least easy to characterize, and include some channels with nonzero private capacity.
The general question of characterizing zero-capacity channels is a very interesting one, though it’s not clear whether we are up to it. But here is a nice specific version of the problem. The capacity of the depolarizing channel drops to zero at noise rate $latex p^*$, where $latex 0.2552 \leq p^* \leq 1/3$. What is $latex p^*$???
A good quote from Jon.

Superactivation demonstrates that asking about the capacity of a quantum channel is like asking about a person’s IQ: one number isn’t enough to capture everything you might want to know.

The main result: There exist Gaussian channels, each with zero quantum capacity, that can be combined to achieve nonzero capacity. These channels are more or less within the reach of current experiments (not sure about all the decoding), requiring about 10dB of squeezing and about 60 photons/channel. This is interesting in part because Gaussian channels seemed to be so well-behaved! For example, there is no NPT bound entanglement in Gaussian channels.
Infinite dimensions: This paper works in infinite dimensions, which many in our field are inherently uncomfortable with, and others are inexplicably drawn to. Like representation theory, or complexity classes with lots of boldface capital letters, the presence of phrases like “Quasi-free maps on the CCR algebra” can be either a good or a bad thing, depending on your perspective.
Q: For those of us who prefer finite dimensions, can we truncate the Hilbert space, perhaps by taking advantage of a constraint on the total number of photons?
A: In principle yes, but then the channels are no longer Gaussian, so our analysis doesn’t work, and in general, Gaussian things are easier to analyze, so that is a sense in which infinite dimensions can actually be simpler to deal with.

Runyao Duan,
Bounds on the distance between a unital quantum channel and the convex hull of unitary channels, with applications to the asymptotic quantum Birkhoff conjecture,
joint work with Nengkun Yu and Quanhua Xu.

The story behind this work starts with the Birkhoff-von Neumann theorem (often called Birkhoff’s theorem) which states that doubly stochastic matrices (matrices with nonnegative entries and all rows and columns summing to one) are convex combinations of permutations. An analogous claim for quantum channels is that unital channels (i.e. mapping the maximally mixed state to itself) are mixtures of unitaries. However, this claim is false. More recently, Smolin, Verstraete and Winter conjectured in quant-ph/0505038 that many copies of a unital channel should asymptotically approach convex combinations of unitaries. (The rest of their paper contained evidence suggestive of this claim, and in general has nice results that are an important precursor of the merging revolution in quantum information theory.) This conjecture was called the Asymptotic Quantum Birkhoff Conjecture (AQBC) and was discussed on Werner’s open problems page. A few years later, it was shown to hold for everyone’s favorite unital-but-not-mixture-of-unitaries channel (What’s that? The one with Jamiolkowski state equal to the maximally mixed state on the antisymmetric subspace of $latex \mathbb{C}^3 \otimes \mathbb{C}^3$ of course!) by Mendl and Wolf.
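The finite-dimensional Birkhoff-von Neumann theorem itself has a simple constructive proof, which is easy to sketch in code: repeatedly find a permutation inside the support of the matrix and peel it off. (The brute-force matching below is my own toy choice, fine for small n; real implementations would use a proper bipartite matching algorithm.)

```python
# Constructive Birkhoff-von Neumann decomposition: peel permutation
# matrices off a doubly stochastic matrix until nothing is left.

def find_permutation(M, tol=1e-12):
    """Return perm with perm[row] = col using only entries
    M[row][col] > tol, or None if no such permutation exists."""
    n = len(M)
    perm, used = [None] * n, [False] * n
    def go(r):
        if r == n:
            return True
        for c in range(n):
            if not used[c] and M[r][c] > tol:
                perm[r], used[c] = c, True
                if go(r + 1):
                    return True
                used[c] = False
        return False
    return perm if go(0) else None

def birkhoff(M, tol=1e-9):
    """Decompose doubly stochastic M as sum_i w_i P_i; returns a list
    of (weight, perm) pairs."""
    M = [row[:] for row in M]
    n, out = len(M), []
    while True:
        perm = find_permutation(M, tol)
        if perm is None:
            return out
        w = min(M[r][perm[r]] for r in range(n))
        out.append((w, perm))
        for r in range(n):
            M[r][perm[r]] -= w

M = [[0.5, 0.5, 0.0],
     [0.25, 0.25, 0.5],
     [0.25, 0.25, 0.5]]
decomp = birkhoff(M)
print(sum(w for w, _ in decomp))  # 1.0 (the weights form a distribution)
```

It is exactly this decomposition that fails to generalize to unital quantum channels.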
Sadly, the AQBC is also false, as Haagerup and Musat recently proved. However, their paper uses von Neumann algebras, and their abstract begins

We study factorization and dilation properties of Markov maps between von Neumann algebras equipped with normal faithful states, i.e., completely positive unital maps which preserve the given states and also intertwine their automorphism groups.

(Another approach, in preparation, is due to Ostrev, Oza and Shor.)
This raises a new open question: is there a nice simple proof that finite-dimension-loving quantum information people can follow without having to learn any new math? (Note: probably we should learn more about von Neumann algebras. Just like we should make our own pizza from scratch. But if there’s a good pizzeria that delivers, I’d still like to hear about it.)
This result delivers, by disproving the AQBC with an elegant, intuitive proof. The key technical lemma is the kind of tool that I can imagine being useful elsewhere. It states that, for the problem of distinguishing two convex sets of quantum channels, there exists a single measurement strategy that works well in the worst case. This builds on a similar lemma (due to Gutoski-Watrous ‘04 and Jain ‘05) which is just minimax applied to state discrimination: if we want to distinguish two convex sets of density matrices, then there exists a single measurement that works well in the worst case. How well? Its performance is at least as good as what we get by choosing the worst pair of states from the two convex sets, and then choosing the optimal measurement based on these.
This paper extends this result to sets of quantum channels. The optimal distinguishability of a pair of quantum channels is given by their diamond-norm distance, which is simply the largest trace distance possible between their outputs when applied to one half of some (possibly entangled) state. To distinguish convex sets of channels, they use minimax again to show that there exists a measurement strategy (this time consisting both of an input state to prepare AND a measurement on the output) that works well for any worst-case choice of input channels. This set of measurement strategies sounds potentially non-convex; however, Watrous’s SDP characterization of the diamond norm shows that measurement strategies form a convex set, so everything works out.
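The single-pair base case is easy to make concrete: for two equiprobable density matrices, the optimal success probability is the Helstrom bound $latex \frac12 + \frac14 \|\rho-\sigma\|_1$, achieved by measuring the sign of $latex \rho-\sigma$. A quick numerical sketch:

```python
import numpy as np

# Helstrom bound: optimal probability of correctly distinguishing two
# equiprobable states rho and sigma is 1/2 + (1/4) ||rho - sigma||_1.

def helstrom(rho, sigma):
    diff = rho - sigma
    trace_norm = np.abs(np.linalg.eigvalsh(diff)).sum()
    return 0.5 + 0.25 * trace_norm

rho = np.array([[1.0, 0.0], [0.0, 0.0]])      # |0><0|
sigma = np.array([[0.5, 0.5], [0.5, 0.5]])    # |+><+|
print(helstrom(rho, sigma))  # ~0.854 for |0> vs |+>
print(helstrom(rho, rho))    # 0.5: identical states, a coin flip
```

The diamond norm plays the same role one level up, with a maximization over (possibly entangled) input states folded in.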
Next, this paper applies this result to estimate the distance between a unital channel and the convex hull of unitary channels. To do this, they use a type of channel which they call a Schur channel (more commonly known as a Schur multiplier): given a matrix $latex A$, define the channel $latex \Phi_A(X) = A \circ X$, where $latex \circ$ denotes the entrywise product.
Thm: The following are equivalent (TFAE):

  1. $latex \Phi_A$ is a Schur channel
  2. $latex \Phi_A$ is unital with diagonal Kraus operators
  3. $latex A$ is positive and $latex a_{kk}=1$ for each $latex 1 \leq k \leq d$.
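
A quick numerical illustration of condition 3 (my own spot check, not part of the paper): for a random positive $latex A$ with unit diagonal, the map $latex X \mapsto A \circ X$ is trace preserving, unital, and positivity-preserving on a random positive input (complete positivity follows from the Schur product theorem).

```python
import numpy as np

# Spot check: if A is positive semidefinite with unit diagonal, then
# the Schur multiplier Phi_A(X) = A ∘ X is trace preserving and unital.

rng = np.random.default_rng(1)

def random_unit_diagonal_psd(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    A = m @ m.conj().T                 # positive semidefinite
    s = 1 / np.sqrt(np.diag(A).real)
    return A * np.outer(s, s)          # rescale so a_kk = 1

d = 4
A = random_unit_diagonal_psd(d)
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
X = X @ X.conj().T                     # a random positive input

out = A * X                            # entrywise (Schur) product
print(np.isclose(np.trace(out), np.trace(X)))  # True: trace preserving
print(np.allclose(A * np.eye(d), np.eye(d)))   # True: unital
# A ∘ X is PSD whenever A and X are (Schur product theorem):
print(np.linalg.eigvalsh(out).min() >= -1e-9)  # True
```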

Using Schur channels, and their discrimination lemma, they are able to make short work of the AQBC. Their next result is that any Schur channel for which conv(Kraus operators of the channel) doesn’t include any unitary is a counterexample to AQBC. This will follow from the fact that the closest random-unitary channel to a Schur channel can be taken (up to factor of 2 in the distance) to be a mixture of diagonal unitaries.
This work doesn’t tell us all of the AQBC-violating channels, but it does describe a large class of them.
They also mentioned connections to Grothendieck’s inequality (work in progress) and Connes’ embedding problem. Connections to these two topics were also mentioned in Haagerup and Musat’s work.

Markus Greiner, Quantum Magnetism with Ultracold Atoms – A Microscopic View of Artificial Quantum Matter.

Some motivation: many condensed matter models can’t be tested with current experiments. Even in simple models it is difficult to calculate basic quantities. Want to build new materials. The idea: use ultracold atoms in optical lattices to simulate condensed matter models. The dream of a quantum simulator is to build a special purpose device which is still useful before full-scale quantum computers are built.
He mentioned that quantum simulators are robust to errors… I believe that this is an open question which theorists should be working on.
How do we cool the atoms? First trap the atoms with lasers. For weakly interacting quantum gases, we can form a BEC where all of the particles can be well described by a single wavefunction. But for strongly correlated quantum gases there are interactions which preclude such a description. In an optical lattice, we have atoms interacting strongly on lattice sites. There is new physics here: for example, superfluid to Mott insulator transition. We can use the optical lattice to make synthetic matter: we can simulate electrons in a crystal. With fermionic atoms, we can observe a fermionic superfluid and things like the BEC-BCS crossover. Bose-Hubbard models and high-T_c superconductivity could also be probed in these systems. Also, quantum magnets, low-dimensional materials, etc.
Let’s focus on quantum magnetism. The main experimental tool is the quantum gas microscope. The goal is to take “bottom-up” tools like high-fidelity readout and local control from e.g. ion traps to more “top-down” systems like optical lattices. (This is clearly related to scalability of quantum computation, ultimately.) The quantum gas microscope can image florescence from single atoms trapped in the lattice. To image this, they measure parity.
He then showed a movie of atoms hopping around in a shallow lattice (in the classical regime, obviously). Very cool. Here is one still from the movie:

The Hamiltonian for this system is a hopping term and an interaction term which penalizes multi-site occupations: a basic Bose-Hubbard model. If the hopping terms dominate, then it’s a superfluid; if the on-site repulsion is dominant, we get a Mott insulator. They can get approximately unit site occupation number in the Mott insulator regime.
How to measure interesting quantities, like correlation functions? Measure the site occupation number (mod 2) on neighboring sites. One can push this further and measure string order parameters. In the future, the hope is to measure more complicated things like Chern number.
They observed the Lieb-Robinson light-cone of correlations.
Algorithmic cooling: Suppose that you have random site occupations. Then you can try to “cool” the system by pushing some of the entropy into a subsystem and get a pure state on the rest. They use Rydberg blockade interactions to try to smooth the site occupation number fluctuations, and then transfer to a superfluid. The temperature is not usually constant in their system, but the entropy is consistently low.
Quantum magnetism: Ising spin models. First consider a chain with on-site longitudinal and transverse fields. There are two phases: paramagnetic and antiferromagnetic. By putting a Mott insulator in an electric field, we can observe this type of transition. We can try to extend these results to two dimensional frustrated spin systems. For example, dimerized states (four-state clock model). These models have spin liquid-like groundstates.
Towards a universal quantum computer: The toolbox includes low entropy initialization, single site readout, single site control, and interactions, all with high fidelity. So what does the future hold?
Q: What types of error are these quantum simulators robust to?
A: Temperature is one type of error. Thermalization might not always occur. Sometimes you want some disorder, since real systems also have disorder.
Q: When you say “fidelity” what do you mean by that?
A: We mean the Hamming distance between classical basis states when we talk about paramagnetic order. That is, our fidelity is 99% if a fraction 99/100 of our spins are correctly aligned when we image the sample.
One question that I didn’t get to ask: what can they say about models which have degenerate ground states? I’m obviously thinking about e.g. the toric code and other topologically ordered models. Can they prepare specific pure states in the ground state manifold?

André Chailloux, Optimal Bounds for Quantum Bit Commitment. Joint work with Iordanis Kerenidis, based on 1102.1678.

Some basic primitives for secure transactions:

Unlike QKD, here there are only 2 players, who distrust each other.
Let Alice and Bob’s optimal cheating probabilities be $latex P_A^*$ and $latex P_B^*$.
The best known quantum protocol achieves
$latex \max\{P_A^*, P_B^*\} = 3/4$ [Ambainis ‘01]
and the best known lower bound is
$latex \max\{P_A^*, P_B^*\} \geq 1/\sqrt{2}$ [Kitaev ‘03]
We improve the bound to $latex \max\{P_A^*, P_B^*\} \geq 0.739$ and build QBC protocols that get arbitrarily close to this bound.
The proof uses Mochon’s weak coin flipping protocol.

Gilles Brassard, Merkle Puzzles in a quantum world. Joint work with Peter Hoyer, Kassem Kalach, Marc Kaplan, Sophie Laplante, Louis Salvail, based on arXiv:1108.2316

Ralph Merkle was the inventor of Public Key Crypto in 1974 before Diffie and Hellman (but after the classified discovery). Gilles began with a great quote from Merkle:

The human mind treats a new idea the same way as the human body treats a new protein.

See Merkle’s web site for the amazing story of how he proposed public-key cryptography as a class project in grad school, and the professor described it as “muddled terribly”, and then he submitted it to JACM, which rejected it because it was “outside of the mainstream of cryptographic thinking.”
Key Distribution problem as imagined by Merkle:
Alice and Bob talk over an authenticated public channel. After doing so Alice and Bob should get a shared secret. Can only be computationally secure, but back then it was not even thought to be computationally possible.
Make the eavesdropper’s effort grow as fast as possible compared to Alice and Bob’s effort
We will measure effort by query complexity to a random oracle. Use that model because it allows to prove a lower bound.
Merkle’s 1974 idea:
Choose a hard-to-invert, but injective, function f that acts on $latex [N^2]$.
Alice chooses N random values of x and computes the values of f(x).
Alice sends all these values to Bob.
Bob wants to invert one of them.
He keeps trying his own random x’s until he gets a collision; call it s.
Bob sends f(s) to Alice and Alice finds secret s.
So after N queries by Alice and N by Bob they get the same s
The eavesdropper has seen only N useless f(x) values and one useful f(s); to find s she has to make about $latex N^2/2$ queries, compared to N for Alice and Bob.
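The whole scheme is simple enough to simulate; here is a toy version with a hash function standing in for the random oracle (all parameter choices below are illustrative, and Eve's quadratic brute-force search is left out):

```python
import hashlib
import random

# Toy Merkle puzzles: domain of size N^2, N puzzles, so the
# eavesdropper's brute-force effort is quadratic in Alice and Bob's.

N = 32
DOMAIN = N * N

def f(x):
    """Stand-in for the hard-to-invert injective function."""
    return hashlib.sha256(str(x).encode()).hexdigest()

random.seed(7)

# Alice: N random domain points and their images, sent to Bob.
alice_pre = random.sample(range(DOMAIN), N)
alice_images = {f(x): x for x in alice_pre}

# Bob: query random points until he hits one of Alice's images
# (expected ~N queries, since each hits with probability N/N^2).
bob_queries = 0
while True:
    s = random.randrange(DOMAIN)
    bob_queries += 1
    if f(s) in alice_images:
        break

# Bob announces f(s); Alice looks up the shared secret s.
shared = alice_images[f(s)]
print(shared == s)  # True: both parties now hold the same secret
print("Bob found the secret after", bob_queries, "queries")
```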
Ok, so this is only a quadratic separation, but we don’t actually know that Diffie-Hellman is any better. The lower bound was proven by Barak and Mahmoody ‘08.
What about the quantum world, where the eavesdropper is quantum? Alice and Bob could be quantum but don’t need to be. The channel remains purely classical, so A&B can’t do QKD
If Eve is quantum, and Alice and Bob are classical, then Eve can find the key in O(N) time, which is the same amount of effort that Alice and Bob expend (albeit quantum and not classical). Unfortunately, this breaks the Merkle scheme completely.
Can it be repaired by allowing A&B to be quantum, or even keeping them classical?
One option is to keep Alice classical, use a space of size $latex N^3$, and allow Bob to use BBHT to find one preimage in O(N) quantum queries. Eve using Grover needs $latex N^{3/2}$ queries, so Alice and Bob get an advantage over Eve.
An improved protocol is for Bob to find two preimages in O(N) quantum queries.
Bob sends Alice the bitwise XOR of f(s) and f(s’). Given this Alice uses a table to find the secret.
This gives effort $latex O(N^{5/3})$ for Eve and O(N) for Alice and Bob. (It’s not trivial to show Eve can do this well, and even less trivial to show this is optimal.)
The proof involves something similar to Ambainis’ element distinctness algorithm with a quantum walk on the Johnson graph. A crucial observation relates to the composition of our variant of element distinctness on N elements with searching each function value in a set of size $latex N^2$. This involves a new composition theorem for non-boolean functions.
What if Bob is classical?
Like before, there are two functions f and t, each with $latex N^2$ points.
Bob finds two preimages as in Merkle. The rest is the same: bitwise XOR.
How much time does a quantum eavesdropper need? $latex N^{7/6}$.
Also, there is a sequence of protocols for which best attack we can find tends to $latex N^2$ in the limit. See paper for details. 🙂

Salman Beigi,
Simplified instantaneous non-local quantum computation with applications to position-based cryptography, joint work with Robert Koenig.

Position based crypto (ie authentication of position) by distance bounding techniques
Challenge response protocol
V1 ———-P—————–V2
Verifiers send challenges timed for synchronized arrival at $latex r_0$, the prover’s claimed position
The honest prover computes and sends the answers
Challenge/response is susceptible to cheating: colluding provers each copy and forward the challenge. This shows why classical protocols are insecure.
Kent, Munro and Spiller 2011, with both security and insecurity proofs
Cheating strategy uses teleportation
In general a quantum protocol is a bipartite unitary.
An attack is a combination of local operations, shared entanglement and one round of communication sufficient to simulate it.
Vaidman ‘03 achieves this with doubly exponential cost in ebits, via recursive teleportation
Buhrman et al 09 key insights:
Simplified protocol uses $latex O(2^n)$ ebits
Uses port-based teleportation, due to Ishizaka and Hiroshima (PRL 2009)
Share $latex K$ maximally entangled states
Highly complicated measurement.
Recovery is very simple. Alice discards all but one of her halves of entangled states
New protocol for instantaneous measurement
Position-based crypto: the cheating landscape
Vaidman: doubly exponential entanglement, can always cheat
Port-based: singly exponential entanglement, sufficient for cheating
If cheaters are restricted to classical communication and only linear entanglement, one can get exponential soundness for position-based crypto
If cheaters are allowed quantum communication and linear entanglement, one can get only constant soundness

Florian Speelman, Garden Hose Game and Application to Position-Based Quantum Cryptography. Joint work with Harry Buhrman, Serge Fehr, Christian Schaffner, based on arXiv:1109.2563.

Position verification applications:

  • launching nuclear missiles (against enemies not equipped with FTL neutrinos, of course)
  • preventing prank calls of ambulances, SWAT teams or pizzas
  • communicating with right wifi router
  • computing international roaming fees properly
  • for when you want to know you are talking to S. Korea and not N. Korea
  • proving that “live”-blogging is actually occurring from the conference location, and is not just the result of typing out our previous opinions of the results.

On the positive side, can show security if promised there is no shared entanglement,
Show a specific class of schemes to obtain a tradeoff. Increased classical communication for honest players
Example: classically secure but broken by teleportation
Verifier 0 sends psi to Prover
Verifier 1 sends bit to prover
Prover routes psi left or right according to bit
Instead of one bit use a function
V0 sends state and n-bit string x to prover
V1 sends an n bit string y to prover
Prover computes the function f(x,y) and sends psi to V0 or V1 depending on the outcome
Nice thing is that it’s no harder for honest prover
But for cheating provers, more entanglement is needed. They can do it with $latex 2 \times 2^n$ ebits
Garden hose model
A discrete way of looking at attacks of this sort
Alice and Bob share s pipes between them. Alice has a water tap
Alice and Bob use hoses, like Enigma plug cords, to connect the ends of the pipes on their own sides in some pattern
f(x,y) is whether water comes out on left or right
The garden-hose complexity of a function is the number of pipes needed to compute the function
Every piece of hose is a teleportation
Can use GH model to prove upper bounds
Equality: $latex 2n+O(\log n)$
Inner product: $latex 4n$
Majority: $latex n^2$
If f is in logspace, then GH(f) is poly(n).
But there do exist functions with exponential GH complexity.
Can prove lower bounds for explicit functions, but best we can prove are linear.
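The model is concrete enough to simulate directly. Below is a toy garden-hose simulator plus a hand-rolled 3-pipe strategy for 1-bit equality; the particular strategy and the convention that water exiting on Bob’s side means x = y are my own illustrative choices.

```python
# Garden-hose simulator: s pipes run between Alice and Bob.  Each
# player, based only on their own input, connects disjoint pairs of
# pipe ends on their side; Alice also attaches the tap to one pipe.
# Water alternates sides until it spills from an unconnected end, and
# the exit side is the value of f(x, y).

def garden_hose(tap, alice_pairs, bob_pairs):
    """Follow the water; return 'alice' or 'bob' (the exit side)."""
    a = {p: q for p, q in alice_pairs} | {q: p for p, q in alice_pairs}
    b = {p: q for p, q in bob_pairs} | {q: p for p, q in bob_pairs}
    pipe, side = tap, 'bob'          # water traverses the tap pipe to Bob
    while True:
        links = b if side == 'bob' else a
        if pipe not in links:
            return side              # open end: water spills out here
        pipe = links[pipe]           # flow back through the connected pipe
        side = 'alice' if side == 'bob' else 'bob'

def equality_strategy(x, y):
    """1-bit equality with 3 pipes: water exits at Bob iff x == y."""
    tap = 1 + x                      # Alice: tap on pipe 1 or 2
    alice_pairs = []                 # Alice makes no connections
    bob_pairs = [(2, 3)] if y == 0 else [(1, 3)]
    return garden_hose(tap, alice_pairs, bob_pairs)

for x in (0, 1):
    for y in (0, 1):
        print(x, y, equality_strategy(x, y))  # 'bob' exactly when x == y
```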

Poster Session 1

The poster session had tons of great posters that we will not remotely do justice. Instead, here are some photos from a few of them. Try to guess which ones. 🙂


Quantum Information and Foundations at the APS March Meeting 2012

After John Preskill’s call for more quantum participation at the APS March Meeting, I couldn’t say no to a request for a blurb about quantum info and foundations! The following is a guest post by Giulio Chiribella.
Following up the previous post by John Preskill, I’d like to draw your attention to the focus session “Quantum Information for Quantum Foundations”, which will take place at the APS March Meeting 2012.
If you are interested in the conceptual and fundamental aspects of Quantum Information, this is the right session for you to come and present your work! The event promises to be lively and stimulating, and will be a great occasion to advertise your recent results.
On top of that, your participation will lend important support to foundational research. The foundational space at the March Meeting is a great opportunity for our community, but it is vital to keep this space alive by responding with visible participation and presenting talks on the best of the foundational research in Quantum Information. This is not hard to do: over the past few years there has been an enormous amount of progress and a burst of exciting new results at the interface between Quantum Information and Foundations. Also, we should not forget the numerous results in Quantum Information that, even without being explicitly foundational, continue to shed a bright light on the operational features of Quantum Theory. It is enough to have such a vibrant scientific landscape represented next March in Boston to make the foundational session memorable!
This year’s session will start with an invited talk by Valerio Scarani, who will summarize the main ideas and the latest developments on Information Causality. Valerio’s talk will be followed by a lineup of contributed talks, which will hopefully be as numerous and as lively as those of last year’s edition, organized by Chris Fuchs, which was a very successful event.
To participate in the session, you can submit your abstract at the webpage http://www.aps.org/meetings/abstract/index.cfm (don’t forget that the deadline for submissions is this Friday, November 11th 2011!).
One last remark before concluding: chatting with colleagues, I have sometimes noticed that potential speakers are discouraged by the 12-minute format, which seems too short to present all the relevant details. We should remember, however, that presenting details is not really the point here: the APS Meetings are huge events where the whole physics community meets to highlight advancements and to advertise new ideas across fields, not to address the specialists of one particular field. The format of the March Meeting talks is designed to rapidly advertise new results, and if you discover that you would like to know more about one particular result… well, during the meeting there is a lot of free time where you can interact directly (and more efficiently) with the speaker about the details of her/his work.
So, let us take the event in the right spirit and make the foundational space at the March Meeting a really exciting forum for the exchange of new ideas!
Hope to see many of you in Boston!

GQI needs you! — John Preskill guest blogs

The following post was written by John Preskill.
I’m writing this guest post because I want all you quantum informationists to come to the American Physical Society 2012 March Meeting in Boston next February 27 through March 2. I’m telling you now, because the deadline for submitting your contributed abstract is next Friday. That’s 11/11/11, which is easy to remember. And lucky.
Why come?
Okay, maybe the March Meeting is not for everyone. Yes, last year there were over 8000 physicists at the meeting in Dallas, including over 3200 students. But that’s not everyone. No, not quite.
And those of us who came might not treasure every memory. Standing dutifully in line for what seems like hours, yet never quite reaching that much needed cup of coffee or sandwich. Missing at least 44 of 45 parallel sessions, while wondering what’s the harm in missing all 45? Surrendering to drowsiness after too many 10-minute talks in a row. Wondering who all these people are.
No, it’s not perfect. But the March Meeting is an exhilarating experience, and you really don’t want to miss it. Though you might want to consider bringing your own sandwich.
Quantum information has been well represented at the March Meeting since 2005, thanks to the efforts of the APS Topical Group on Quantum Information (GQI). Each year the membership of GQI has swelled and our participation in the March Meeting has ramped up, which allows GQI to have an even larger presence in the next meeting. Last year 311 GQI members attended and there were 324 contributed talks in the quantum information sessions. I hope we will beat those numbers substantially in 2012.
The March Meeting  provides a valuable opportunity for quantum information enthusiasts to exchange ideas with the broader physics community, and to convey the excitement of our field. Naturally, since the March Meeting is dominated by the condensed matter physicists, the interface of quantum information with condensed matter is especially emphasized, but contributions in all areas of quantum information science are welcome.
The 2012 program will be especially exciting. GQI will sponsor or co-sponsor six sessions of invited talks covering important recent developments:  topological quantum computing with Majorana fermions, quantum entanglement in many-body systems, quantum simulations, quantum computing with superconducting circuits, quantum information processing in diamond, and silicon spin qubits. In addition, there will be an invited session about career opportunities in undergraduate teaching for quantum information scientists.
If our turnout is below expectations, the GQI Program Chair gets the blame. And that’s me. So … don’t make me look bad — come to Boston and contribute a talk!
To register and submit an abstract go to:
http://www.aps.org/meetings/march/
You should flag your abstract with an appropriate quantum information Focus Topic or Sorting Category from the list available at the website, to ensure that your talk is scheduled in the proper session.
And if you are not a member of GQI, please join. Increasing our numbers means more visibility and influence for GQI within APS, and that too is good for the cause of quantum information science.
See you in Boston!

QEC '11 Registration Deadline Extension

The registration and submission deadline for QEC ’11 has been extended by a week and is now open until November 7th.
You can register here: http://qserver.usc.edu/qec11/reg.html
It looks like a great conference… I wonder if the Ghost Pontiff will show up to give his invited talk?

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 4

The fourth and final day of the conference. Update Oct 26: The slides from the talks will be posted online here.

Quantum Information in Quantum Many-Body Physics

Roger Melko, Entanglement entropy in exotic phases of quantum matter.

Review of quantum Monte Carlo (QMC). Sign problem. Want to compute Rényi entanglement entropies for frustrated spin systems. He states that the record for Lanczos diagonalization for just the ground state is… 48 spins. (Wow! Presumably this is exploiting some specific symmetries, but still, that’s impressive.) Worried about “weak” critical points where the correlation length is merely larger than the maximum simulable system size, but doesn’t actually diverge in the thermodynamic limit. Works on trying to distinguish these possibilities. In systems with long-range entanglement or topological order, the entanglement scaling can act as a “Landau-like” order parameter. Can also give a picture of “non-Landau” (deconfined) quantum critical points. The subleading correction to the area law contains universal information. We will compute Rényi entanglement entropy. Introduce the valence bond basis for QMC sampling (Sandvik PRL 2005). In particular, use projector QMC. The estimator that they use for estimating expectation values is weighted by the action of a projector on a trial (valence bond) state. In this way, many quantities can be estimated, such as energies, order parameters, real-space correlation functions, etc. In order to measure entanglement, we can use the swap test. We can use these techniques to study many models: Heisenberg (Néel ordered ground state), resonating valence bond spin liquids, J-J’ models (conventional quantum critical points), J-Q models (unconventional, deconfined quantum critical points). Showed some results for the Heisenberg model on a torus. Stochastic series expansion: expand the finite-temperature partition function by expanding the exponential in its Taylor series. You get powers of $latex -\beta H$, which you can sample in various ways and using various update rules. (Several buzzwords that I didn’t understand: worm algorithms, directed loop algorithms?)
To access Rényi entropies, we can use the replica trick by gluing together several Riemann sheets (depending on which order of the Rényi entropy is desired). Can also compute (Rényi) mutual information. Want to study spin liquids. One possible model, a kagome-lattice Bose-Hubbard model, supports a $latex \mathbb{Z}_2$ quantum spin liquid. This model is a loop gas similar to the toric code. When computing (Rényi) topological entanglement entropy there is a finite-size “chord length” term that doesn’t exactly cancel. Showed some numerics for this model.
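The swap test mentioned above rests on the identity that Tr(rho_A^2) equals the expectation of the swap operator acting on subsystem A across two copies of the state. A small numerical sanity check of that identity (our own sketch, not the QMC estimator from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random pure state on a bipartition A x B with dA = dB = 4.
dA, dB = 4, 4
psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)

# Reduced density matrix on A and the Renyi-2 entropy S2 = -log Tr(rho_A^2).
M = psi.reshape(dA, dB)
rho_A = M @ M.conj().T
purity = np.trace(rho_A @ rho_A).real
S2 = -np.log(purity)

# Replica/swap identity: on two copies, <SWAP_A> = Tr(rho_A^2).
psi2 = np.kron(psi, psi)                 # |psi> tensor |psi>
# SWAP_A exchanges the A factors of the two copies:
# index order (A1, B1, A2, B2) -> (A2, B1, A1, B2).
T = psi2.reshape(dA, dB, dA, dB)
swapped = T.transpose(2, 1, 0, 3).reshape(-1)
swap_expect = np.vdot(psi2, swapped).real

print("S2 =", S2, " <SWAP_A> =", swap_expect, " Tr(rho_A^2) =", purity)
```

In the QMC setting one estimates the same swap expectation stochastically from valence-bond configurations rather than from the dense state vector.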

Frank Verstraete, The entanglement spectrum of strongly correlated quantum many-body systems.

Began with an overview of what questions people care about with regard to the entanglement spectrum. Area laws, universal terms, dimensional reduction and holography. Says the following (I’m paraphrasing except for the quotes): Since I was reluctant to prepare my talk, I’ve decided, “with a stroke of genius,” to turn this talk around and ask the audience what they would like to hear about so that I don’t disappoint anyone. He discusses the notion of PEPS as dimensional reduction: we argue that we can map 2D quantum systems to 2D classical systems (the PEPS), even though this is potentially difficult (complexity-theoretic obstacles in general), and we know that 2D classical systems can be mapped to 1D quantum systems. But we know how to deal with 1D quantum systems if locality is maintained throughout this series of mappings. These methods should work for gapped systems, but will break down for critical systems. A long review of why we care about area laws, with some examples and numerics for gapped and critical systems. We can only get area laws if there is very little weight in the tail of the Schmidt spectrum. But you also need to show that truncating small Schmidt coefficients across one cut doesn’t truncate large Schmidt coefficients across another cut. “Because there is an isometry that maps virtual degrees of freedom to physical ones in an MPS, this proves that the virtual degrees of freedom are real. This is the main message of this talk.” (?) Symmetries present in the state are inherited in the tensor network description, for example in the AKLT spin chain. Review of cMPS. Showed some numerical calculations of the entanglement spectrum in the Lieb-Liniger model using cMPS. Even though this model is known to be critical, we see fast decay of the Schmidt coefficients.
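The point about weight in the Schmidt tail can be made concrete with an SVD: the Schmidt coefficients across a cut are the singular values of the reshaped state vector, and the truncation error is the discarded weight. A small sketch (our own, using an artificial exponentially decaying spectrum as a stand-in for a gapped ground state):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a bipartite state with an exponentially decaying Schmidt
# spectrum across one cut (a stand-in for a gapped ground state).
d = 16
lam = np.exp(-0.8 * np.arange(d))
lam /= np.linalg.norm(lam)
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
V, _ = np.linalg.qr(rng.normal(size=(d, d)))
M = U @ np.diag(lam) @ V.T        # the state psi reshaped as a d x d matrix

# Schmidt coefficients are the singular values of M.
s = np.linalg.svd(M, compute_uv=False)

# Keeping only the chi largest coefficients discards weight
# 1 - sum(s[:chi]^2), which falls off exponentially with chi.
for chi in (2, 4, 8):
    err = 1.0 - np.sum(s[:chi] ** 2)
    print(f"chi={chi}: truncation error {err:.2e}")
```

An MPS of bond dimension chi reproduces the state up to exactly this kind of discarded weight, cut by cut.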

Stephan Inglis, Measuring Rényi entropies in finite-temperature QMC using Wang-Landau sampling.

Review of the definition of Rényi entropy and why we care about it. Review of classical MC with the Metropolis update rule. The Wang-Landau (WL) algorithm calculates the density of states instead. Assume an initial DOS, generate a new state and visit it with some probability $latex p$, then update the DOS by multiplying by some function $latex f$ so that $latex g_{i+1}(E) = g_i(E) \times f$. The acceptance probability is $latex \min(g(E_i)/g(E_f),1)$, where $latex g(E_i)$ is the current DOS estimate at the initial energy and $latex g(E_f)$ is the estimate at the proposed final energy. WL works by calculating the DOS as a function of the energy. In stochastic series expansion (SSE) quantum Monte Carlo (QMC), the energy of the system is related to the number of operators sampled, so a more natural way to parameterize things is in terms of $latex \beta$ and a power series expansion. WL gives a much simpler way of computing Rényi entropies because there is no integration needed. At finite temperature, we can use mutual information instead. Showed a few examples. Now a discussion of the noise in WL. Is there a self-consistent way to check the noise? Check the first derivative of the DOS. The noise in the derivative seems to be uncorrelated, so we can just average several simulations to converge to accurate results. The advantages of WL: only one simulation is required rather than many, there is no explicit cumulative integration error, and results can be improved by building on old data rather easily. The difficulties are that a single simulation takes longer and it is harder to measure observables that are not related to energy.
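To illustrate the WL update rule, here is a minimal sketch for a tiny classical Ising ring, where the exact density of states is known by enumeration. This toy (our own, not the SSE version discussed in the talk) uses the flat-histogram acceptance rule and halves ln f at each flatness check:

```python
import math
import random

random.seed(42)

N = 6  # spins on a ring; exact DOS is {-6: 2, -2: 30, 2: 30, 6: 2}

def energy(s):
    return -sum(s[i] * s[(i + 1) % N] for i in range(N))

levels = [-6, -2, 2, 6]
ln_g = {E: 0.0 for E in levels}   # running estimate of log DOS
hist = {E: 0 for E in levels}
s = [1] * N
E = energy(s)
ln_f = 1.0                        # modification factor, ln f

while ln_f > 1e-4:
    for _ in range(10000):
        i = random.randrange(N)
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
        E_new = E + dE
        # Accept with min(1, g(E)/g(E_new)): the flat-histogram rule.
        if random.random() < math.exp(min(0.0, ln_g[E] - ln_g[E_new])):
            s[i] = -s[i]
            E = E_new
        ln_g[E] += ln_f           # multiply g(E) by f, in log form
        hist[E] += 1
    # When the histogram is roughly flat, reset it and halve ln f.
    if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
        hist = {k: 0 for k in hist}
        ln_f /= 2.0

# Normalize so that the total number of states is 2^N.
top = max(ln_g.values())
Z = sum(math.exp(v - top) for v in ln_g.values())
g = {E: 2 ** N * math.exp(v - top) / Z for E, v in ln_g.items()}
print(g)   # close to the exact {-6: 2, -2: 30, 2: 30, 6: 2}
```

From the estimated g(E) one can then assemble thermodynamic quantities at any temperature from the single run, which is the advantage cited in the talk.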

Jutho Haegeman, Time-dependent variational principle for quantum many-body systems.

Time-dependent methods for quantum lattice systems include time-evolving block decimation (Vidal). There are several disadvantages, however: the method breaks symmetries, the state temporarily leaves the variational manifold, the truncation is suboptimal for infinite-sized systems, and it is manifestly discrete in time. Can we implement this with cMPS? We have to use some results from Dirac. We project onto the tangent plane of the variational manifold at each point in time. If the time-dependent evolution never leaves the manifold, then you get a non-linear differential equation within the manifold. This method inherently respects all symmetries and there is no Trotter error. It is globally optimal (spatially… it is still local in time) and there is no truncation. We will work with uniform MPS. We need to compute overlaps of tangent vectors, and this introduces a metric. We can simplify this through gauge fixing. MPS are gauge invariant in the sense that if we conjugate the bond indices we get the same physical state. Taking tangent vectors, we get an additive correction which depends only on the choice of gauge, so we can pick the most convenient choice. Describes (pictorially) some gauge fixing conditions which make things particularly nice. Can use this for both imaginary and real time evolution. The method also provides additional checks on convergence by monitoring the length of the tangent vector. Showed some examples of numerics. Near a steady-state solution, you can linearize the differential equation to simplify things. This leads to the linear (tangent) subspace around the stable point (Rayleigh-Ritz equations). Can study excitations with momentum $latex k$ by combining the Östlund-Rommer ansatz and the Feynman-Bijl ansatz (single-mode approximation). More numerics. Could also study domain walls with momentum $latex k$ using the Mandelstam ansatz. More numerics.

Ann Kallin, Scaling of entanglement entropy in the 2D Heisenberg ground state.

QMC in the valence bond basis for measuring the Rényi 2-entropy. Spin-1/2 antiferromagnetic Heisenberg model on a square lattice with only nearest-neighbor interactions. Use the power method to get something close to the ground state. Showed some numerics… the method converges well on up to a 20 by 20 lattice. They do QMC sampling with importance weighting. The power of the Hamiltonian acting on the trial state can be written as a sum over products, and this is what they sample. Then they use an update rule to switch the valence bonds to do the sampling. Computing the overlap between two valence bond states is easy, and just amounts to counting the number of loops in the resulting diagram when you superimpose both valence bond patterns on the same lattice. This is the “regular” VB QMC. There is also a “loop” version of the algorithm. Now you have two trial states and sandwich the $latex m$ different operators from the Hamiltonian between them. This method is similar to the SSE method discussed earlier in the conference, but with different boundary conditions. Then some numerics with 100 sites on a 1D Heisenberg chain. The loop algorithm converges quite quickly as long as the size of the region isn’t too large (as a fraction of the total system size). One further improvement is to use a ratio trick, namely, measure ratios of entropies for different region sizes. In 2D, they wanted to look at the effect of corners in the region on the entropy. Chose square-shaped regions and stripes around the torus. Plot scaling of the 2-entropy as a function of the subsystem size. They fit to a function of the form $latex aL + b \log L + c$. Showed some nice pictures of “Rényibows” which simultaneously plot the entropy over several system sizes and subsystem sizes. (The name comes from the colors used in the plot.) One of the things they observed was a 1D CFT-like chord-length correction in 2D systems. Also saw some effects due to corners.
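Since the fitting function above is linear in the coefficients a, b, c, the fit reduces to ordinary least squares. A quick sketch with invented data (not their numbers):

```python
import numpy as np

# Synthetic "entropy vs boundary length" data of the assumed form
# S(L) = a*L + b*log(L) + c, plus a little noise; the coefficients
# below are invented purely for illustration.
rng = np.random.default_rng(2)
L = np.arange(4, 40, 2, dtype=float)
a_true, b_true, c_true = 0.31, 0.12, 0.55
S = a_true * L + b_true * np.log(L) + c_true \
    + 0.001 * rng.normal(size=L.size)

# The model is linear in (a, b, c), so a design matrix + lstsq suffices.
X = np.column_stack([L, np.log(L), np.ones_like(L)])
(a, b, c), *_ = np.linalg.lstsq(X, S, rcond=None)
print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}")
```

In practice the interesting physics sits in the subleading b log L term, so the fit quality over a range of L matters much more than this toy suggests.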

Johannes Wilms, Mutual information in the Lipkin-Meshkov-Glick model.

We will work with mutual information because it is a quantity which describes mixed states, and we will consider finite temperature. Define the LMG model: every spin interacts with every other spin ($latex XX$ coupling), together with a transverse field. This model was originally studied in the context of atomic nuclei, and more generally for shells of finite Fermi systems and metal clusters, He-3 droplets, “magnetic molecules”, and also mappings to BEC and cavity QED. More recently people have looked at ground state and thermal properties. Showed the phase diagram, temperature versus magnetic field strength. We can compute many things in this model by exploiting symmetry. For example, total angular momentum is a good quantum number. The Hamiltonian decomposes into polynomial-sized blocks which have exponential multiplicity. The best way to study the problem is to use Clebsch-Gordan coefficients. Showed a plot of concurrence in the phase diagram… it doesn’t work so well. On the other hand, the mutual information perfectly tracks the phase transition. They looked at the scaling of the mutual information at criticality. They find that it scales linearly with the log of the system size, with a prefactor of 1/4 in the finite temperature case (as opposed to 1/3, which is what you expect at zero temperature).
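The block structure can be illustrated in the maximal-spin sector S = N/2, which has dimension N+1 instead of 2^N. The Hamiltonian normalization below is one common convention that we have assumed for illustration; it is not taken from the talk.

```python
import numpy as np

# Collective spin operators in the maximal sector S = N/2 (dim N+1).
# Assumed illustrative convention: H = -(2/N) Sx^2 - h Sz.
N = 20
S = N / 2
m = np.arange(S, -S - 1, -1)                 # Sz eigenvalues S..-S
Sz = np.diag(m)
# S+ |S,m> = sqrt(S(S+1) - m(m+1)) |S,m+1>  (superdiagonal here)
cp = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))
Sp = np.diag(cp, 1)
Sx = (Sp + Sp.T) / 2

def ground(h):
    """Ground energy and state of the (N+1)-dim collective block."""
    H = -(2.0 / N) * Sx @ Sx - h * Sz
    w, v = np.linalg.eigh(H)
    return w[0], v[:, 0]

for h in (0.5, 5.0):
    e0, psi0 = ground(h)
    mz = psi0 @ Sz @ psi0 / S                # magnetization per spin
    print(f"h={h}: E0/N={e0 / N:.3f}, <Sz>/S={mz:.3f}")
```

Diagonalizing one (N+1)-dimensional block per total-spin sector, weighted by its exponential multiplicity, is what makes the exact finite-temperature study feasible.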

Winton Brown, Quantum Hammersley-Clifford theorem.

The classical HCT is a standard representation theorem for positive classical Markov networks. Recently, quantum Markov networks have been used in approximation methods for lower bounds on the free energy of quantum many-body systems. Defined conditional mutual information and the strong subadditivity inequality. The Markov condition in the classical case is given by $latex I(A:C|B) = 0$, which implies that $latex p_{ABC} = p_{A|B} p_B p_{C|B}$ where $latex p_{A|B} = p_{AB}/p_B$. A Markov network is a generalization of a Markov chain to an arbitrary graph $latex G$. The Markov condition just says that whenever I partition the graph into three sets of vertices $latex A, B, C$ with $latex B$ separating $latex A, C$, then $latex I(A:C|B) = 0$. The classical HCT says that any positive probability distribution factorizes over the cliques of $latex G$. The proof is simple. Take the log of the probability distribution. From the conditional independence, we find that $latex \log p$ is a sum of two terms that act trivially on either $latex A$ or $latex C$. … couldn’t get the rest down in time. For quantum states, we just use the quantum conditional mutual information. What quantum states can be supported in this framework? We immediately find that $latex \rho = \rho_{AB} \rho_{BC}$ where the two factors commute. Then repeating the proof from the classical case runs into trouble because we don’t know if the various terms commute. Consider a graph which has no 3-cliques. On these graphs we have enough structure that each two-body operator has to be zero. This is almost enough, but some 1-body operators might remain. But it turns out this is not an obstacle. Unfortunately, there is no simple general result. A counterexample exists with a five-vertex graph with a single fully connected central node and a square of four nodes surrounding it. Now consider a connection to PEPS. We can construct a PEPS by applying a linear map to each site of virtual spins.
If the linear map is unitary, then the PEPS built in this way are quantum Markov networks. Under what conditions can we show the reverse implication? For a non-degenerate eigenstate of a quantum Markov network, the Markov properties imply an entanglement area law. Thus, an HC decomposition implies a PEPS representation of some fixed bond dimension. Open problem: show under what conditions quantum Markov networks which are pure states have an HC decomposition. There exist non-factorizable pure-state quantum Markov networks (simple example).
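The classical Markov condition $latex I(A:C|B) = 0$ and the chain factorization $latex p_{ABC} = p_{A|B} p_B p_{C|B}$ can be checked numerically on a toy distribution (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

# Build a Markov chain A - B - C with binary variables:
# p(a,b,c) = p(a|b) p(b) p(c|b).
pB = np.array([0.4, 0.6])
pA_given_B = rng.dirichlet([1, 1], size=2)   # row b -> p(a|b)
pC_given_B = rng.dirichlet([1, 1], size=2)   # row b -> p(c|b)
p = np.einsum('ba,b,bc->abc', pA_given_B, pB, pC_given_B)

def cond_mutual_info(p):
    """I(A:C|B) in nats for a joint array p[a,b,c]."""
    pb = p.sum(axis=(0, 2))
    pab = p.sum(axis=2)
    pbc = p.sum(axis=0)
    I = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                if p[a, b, c] > 0:
                    I += p[a, b, c] * np.log(
                        p[a, b, c] * pb[b] / (pab[a, b] * pbc[b, c]))
    return I

print("Markov chain I(A:C|B) =", cond_mutual_info(p))      # ~0
# A generic joint distribution with direct A-C correlation is NOT
# Markov with respect to the chain A - B - C:
q = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)
print("generic I(A:C|B) =", cond_mutual_info(q))           # > 0
```

The quantum difficulty discussed in the talk is precisely that the analogous factors $latex \rho_{AB}$ and $latex \rho_{BC}$ need not commute, so this log-and-split argument does not carry over directly.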

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 3

Day 3 of the conference.

Quantum Information in Quantum Many-Body Physics

Guifre Vidal, Criticality, impurities and real-space renormalization, joint work with Glen Evenbly.

First an overview of the multi-scale entanglement renormalization ansatz (MERA). Basic notions: isometries, disentanglers, causal cone. Bounded causal cone width means that it is possible to efficiently evaluate the expectation value of local observables. One can also coarse-grain a local operator by applying one level of the MERA. (For ternary MERA, a two-body term gets mapped to a two-body term.) We will focus on critical systems, and then put in impurities. To enforce scale and translation invariance, we characterize the MERA by just a single pair of tensors $latex U$ and $latex W$ repeated across the whole network. MERA seems to be a good ansatz to describe critical systems where such invariance is manifest. MERA also can exhibit polynomial decay of correlations. Define the scaling superoperator. The eigenvalues and eigenvectors of this superoperator are the scaling operators and (the exponential of) the scaling dimensions of the underlying CFT. How does one choose the optimal tensors given a local Hamiltonian? He dodges the question… doesn’t want to get into such details in this talk. Showed an example for the 1D critical Ising model with transverse field. Now let’s consider an impurity at the origin. Now we lose translation invariance, but we still have scale invariance (conformal defect). The bulk scaling operators in the absence of an impurity have zero expectation value. However, in the presence of an impurity the one-point functions decay like a power of the distance to the impurity. We need to add new scaling operators and new scaling dimensions. Since translation invariance is broken, we can’t possibly optimize over an extensive number of tensors (because we could never reach the thermodynamic limit). Well, desperate times call for desperate measures. Introduce something called the “principle of minimal influence” in the RG flow.
The PMI says: given a MERA for the initial (impurity-free) problem, we can obtain a MERA for the problem with an impurity by varying only the tensors in the causal cone of the region containing the impurity. We don’t expect the principle to work in all cases, such as in systems with spontaneous symmetry breaking where the impurity restores the symmetry. We are still exploring the limits of applicability of this principle. If you split the isometries into two “half isometries”, you get that the total number of tensors you need to optimize is still $latex O(1)$. Why is this ansatz plausible? For one thing, we can show that the scaling superoperator now exhibits power-law decay with respect to the distance from the impurity. The optimization goes as follows. First optimize with respect to the bulk Hamiltonian (no impurity). Next, use the bulk tensors to map the original impurity problem to an effective impurity problem. This gives a new Hamiltonian problem with exponentially decaying interaction strengths. Finally, we can reduce the optimization via “folding in half” to that of an MPS for an open boundary problem. This same form appears in Wilson’s solution of the Kondo problem. However, the impurity MERA can be used beyond the setting of free fermions, unlike Wilson’s original approach. Showed an example of the transverse-field Ising model with an $latex XX$ impurity. This model allows an exact solution via CFT, and the numerics agree with the exact solution for various impurity strengths. We can extend this to more than one impurity by treating each impurity individually in their non-intersecting causal cones, and then once the causal cones intersect we do one more variation. We can also have an impurity that “cuts” the chain, or a boundary between two half-chains coupled by the impurity, and even various boundary conditions.
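The isometries that build up a MERA can be illustrated in a few lines: a toy sketch (our own, with the disentanglers omitted) showing how a two-site operator is ascended through one isometry while expectation values are preserved for states in its image.

```python
import numpy as np

rng = np.random.default_rng(4)

# A random isometry w: C^chi -> C^(d*d), coarse-graining two sites of
# dimension d into one effective site of dimension chi.
d, chi = 2, 2
A = rng.normal(size=(d * d, chi))
w, _ = np.linalg.qr(A)            # orthonormal columns: w^dag w = I_chi
assert np.allclose(w.conj().T @ w, np.eye(chi))

# "Ascending" a Hermitian two-site operator through the isometry.
op = rng.normal(size=(d * d, d * d))
op = (op + op.T) / 2
op_coarse = w.conj().T @ op @ w

# Expectation values are preserved when the fine-grained state lies in
# the image of w: |psi_fine> = w |psi_coarse>.
psi = rng.normal(size=chi)
psi /= np.linalg.norm(psi)
fine = w @ psi
print(np.allclose(fine @ op @ fine, psi @ op_coarse @ psi))
```

In a real MERA the disentanglers first remove short-range entanglement across block boundaries so that an isometry of small chi is sufficient; this sketch only shows the isometric half of the story.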

André-Marie Tremblay, Success and limitations of dynamical mean-field theory, joint work with Sordi, Sénéchal, Haule, Okamoto, Kyong and Civelli.

First recall how one “builds” a metal. We take a lattice and put orbitals on the sites. If the orbitals overlap, then it takes no energy to excite an electron. However, this doesn’t always work, e.g. in NiO. Recall the Hubbard model with on-site repulsion $latex U$ and hopping strength $latex t$ (and $latex t’$ for nearest diagonal neighbors, etc.). When $latex U=0$, we can Fourier transform and solve the model. When $latex t=0$, everything is localized, and there are $latex 2^n$ degenerate ground states at half filling corresponding to the different electron configurations with one electron per site. We can observe antiferromagnetism in the Hubbard model. We see a Mott transition in the region where $latex t$ and $latex U$ are of the same order, and at this point in the phase diagram neither the plane-wave nor the localized wavefunctions give a good description. One can write an effective model with Heisenberg coupling. We would like to measure Green’s functions for a single particle. Why do we care about Green’s functions? We can use them to compute the value of other observables, and we can measure them directly via photoemission. Fermi liquid theory means that the system is not too far from a free-electron problem. We want to be able to solve in the presence of an impurity. Let’s review dynamical mean-field theory (DMFT). We begin by working in infinite dimensions. Then the self-energy is a function only of the frequency. We want to compute the self-energy of the impurity problem. Then use that self-energy (which depends only on frequency) for the whole lattice. Now project the lattice onto a single site and adjust the bath so that the single-site density of states obtained both ways are equal. There are some self-consistency equations. We can use a number of methods: cavity method, local nature of perturbation theory in infinite dimensions, expansion around the atomic limit, effective medium theory, Potthoff self-energy functional.
Use the universal self-energy functional method (see the talk by Sénéchal). Vary with respect to the parameters of the cluster (including Weiss fields). In this context, we can view DMFT as a stationary point in the variational space. What about the impurity solvers? The most trivial one is just exact diagonalization. Could also do Monte Carlo using an ergodic Markov chain. Showed a bunch of plots of these numerical methods for various models. The strengths of DMFT: one can obtain one-particle properties and phase diagrams. Weaknesses: order parameters are not coupled to observables that are quadratic in creation and annihilation operators, transport (four-point correlation functions) is hard to access, and QMC solvers require analytic continuation. A future challenge is to find the optimum environment.

Matthias Troyer, Galois conjugates of topological phases, arXiv:1106.3267.

The question is: should we explore the computational power of non-unitary topological phases? Such phases appear through Galois conjugation of unitary topological phases. The arising models are non-Hermitian, but still have a purely real spectrum. Do they make sense? First, a review of topological phases and anyons (both Abelian and non-Abelian). Review of $latex mathrm{SU}(2)_k$ fusion rules. These states might appear in the Pfaffian state (Moore & Read) which has Ising anyons (level $latex k=2$) or in the parafermion state (Read & Rezayi) which has Fibonacci anyons (level $latex k=3$). How can we make more exotic states? We start with an anyonic liquid on a high-genus surface. Picture two sheets which are connected by fusing a bunch of perforations through both sheets. These create doubled (non-chiral) models. We can consider two types of quantities: flux through a hole and flux through a tube. We can build a family of basis states to describe the anyon wavefunctions by building a “skeleton” through each of the tubes. The basis states are labeled by string nets of particle labels on the skeleton lattice. Now we add the Hamiltonian for the model. We can either add flux through a tube or a hole, and each of these costs some different amount of energy. Each of these terms acts on either plaquettes or edges of the skeleton lattice. The plaquette term gives a ground state without flux through the plaquettes. This is like closing each of the holes: we get vacuum states of anyons on each of the two sheets. This is just the toric code or the Levin-Wen model. It’s a topological phase which is robust at zero-temperature to weak perturbations. This can be solved exactly, even for the excited states. Violations of the vertex constraint are chiral anyonic excitations living on the surface. Consider a finite density of anyons, and let’s pin the anyons into fixed positions. We would like to add a Heisenberg-like interaction Hamiltonian to this. 
We just introduce an energy difference between the various labels of the particle types. Review of the fusion basis for anyons, specialized to the Fibonacci case. Review of the F-moves ($latex 6j$ symbols, Clebsch-Gordan), the pentagon and hexagon equations. So what does a Heisenberg chain for anyons look like? $latex H = \sum_i F_i \Pi_i^0 F_i$, where $latex F_i$ is the F-move that exchanges two nearest neighbors in a chain and $latex \Pi_i^0$ is a projector onto a maximally entangled state. Ran numerical simulations to compute the gap… but it was gapless! The gap decreased like $latex 1/L$, so we guess it is a critical model. We can compute the central charge using the connection with entanglement entropy. It turns out that the model is exactly solvable by using the Temperley-Lieb algebra. They studied these interacting chains for various values of the level $latex k$. There are lots of equivalences known to CFTs depending on the central charge and the sign of the coupling (antiferromagnetic or ferromagnetic). Plot of the energy spectrum, with very good agreement between the exact solution and numerics. When there are finite-density interactions, there are gapless edge modes and a nucleated liquid. This foreshadows the situation in 2D. You get a nucleated liquid with a finite bulk gap and edge states. A novel quantum Hall liquid is nucleated inside the parent liquid. Now we move to the Galois conjugation. Review of Galois theory. Different roots of a polynomial are Galois conjugates of each other. The algebraic properties are the same, but the geometric properties are different. As an example, consider again the Fibonacci case. The F-tensor element $latex F_1^{111}$ can be expressed in terms of the golden ratio $latex \phi$, or in terms of $latex -1/\phi$. However, this second one (Yang-Lee theory) is non-unitary. But if you look at the spectrum of the antiferromagnetic Yang-Lee model, you get a completely real spectrum! The central charge is negative (-3/5).
The vacuum state is not the lowest energy state in this model. Similar conclusions for the ferromagnetic case. How does the entanglement entropy scale? It can’t still obey $latex (c/3) \log(L)$ since the central charge is negative, right? Let’s look at the doubled Yang-Lee model. We can redefine the inner product to get a self-adjoint Hamiltonian (Bender et al., PT-symmetric quantum mechanics). We can solve this doubled YL model exactly, just like with the Levin-Wen models before. Now the code property that defines topological order has a “left-right” eigenvector version, since these are now distinct notions. There are several ideas one might have to turn it into a Hermitian model: square the plaquette terms, or use just the projector onto the left or right eigenspaces. But these approaches don’t work at giving a topological phase which is robust… they are sensitive to local perturbations. “Hermitianizing” destroys the left-right code property. Might there be some local transformation to a Hermitian model? If not for Fibonacci, then maybe for some other non-unitary TQFT? Norbert: can you use a tensor network description of the ground state to make a Hermitian parent Hamiltonian? Which ground state do you choose? The left or the right? Main result: Galois conjugates of TQFTs where the braid group acts densely in $latex \mathrm{SU}(2)$ cannot be realized as ground states of quasilocal Hermitian Hamiltonians. Non-Abelian topological phases offer universal quantum computation; the minimal models of CFT are Heisenberg models of non-Abelian anyons; Galois conjugation of topological phases gives intriguing non-unitary theories and non-Hermitian models; but non-unitary phases will not appear in physical systems. Tobias: can’t we add a single extra spin which couples to all the spins and make the model Hermitian?
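The contrast between the Fibonacci and Yang-Lee theories shows up already at the level of the F-matrix: substituting the Galois conjugate root of x^2 = x + 1 preserves the algebraic relation F^2 = 1 but destroys unitarity. A quick check (our own sketch; F-matrix conventions vary by gauge):

```python
import numpy as np

def F_matrix(phi):
    """Fibonacci F-matrix built from a root phi of x^2 = x + 1.

    np.emath.sqrt returns a complex value for negative phi, which is
    exactly what happens for the Yang-Lee (Galois conjugate) root.
    """
    r = np.emath.sqrt(phi)
    return np.array([[1 / phi, 1 / r],
                     [1 / r, -1 / phi]])

golden = (1 + np.sqrt(5)) / 2        # phi, the golden ratio
yang_lee = (1 - np.sqrt(5)) / 2      # Galois conjugate: -1/phi

for name, root in [("Fibonacci", golden), ("Yang-Lee", yang_lee)]:
    F = F_matrix(root)
    involutive = np.allclose(F @ F, np.eye(2))   # algebraic relation
    unitary = np.allclose(F.conj().T @ F, np.eye(2))
    print(f"{name}: F^2 = I? {involutive}, unitary? {unitary}")
```

Both roots satisfy the pentagon-compatible relation F^2 = 1, but only the golden-ratio root gives a unitary F, which is the algebra-versus-geometry distinction made in the talk.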

Renato Renner, Aisenstadt Lecture, An information-theoretic view of thermalization.

Consider a very trivial system of a classical binary random variable $latex C$ (for classical) which takes the values 0 or 1 with equal probability. Now let’s introduce an observer $latex A$ who has some knowledge about $latex C$. Suppose that $latex A=C$ with probability $latex 1-\epsilon$ and $latex A=1-C$ with probability $latex \epsilon$. Now when we condition the probability of $latex C$ on the state of $latex A$ we get a new distribution. A Bayesian would just call this an update. Suppose now that a new observer $latex Q$ has quantum information about the system, so the conditional state of $latex Q$ is a quantum state. If we try to condition the state of $latex C$ on the quantum information $latex Q$, this becomes more interesting. For example, in quantum cryptography there is no reason to assume that the adversary is classical, so it makes sense to consider this. Now let’s move on to thermodynamics. Consider a Szilard engine, i.e. a box with one particle in it and a piston that can compress to the middle of the box. If we let the “gas” expand isothermally or compress it, which gives or takes work, then we find that the work satisfies $latex W = k T \log 2$ to transition between the two states. From information theory, we know that a two-state system stores information. To erase information costs $latex k T \log 2$ units of energy. Landauer showed that in fact this is optimal (Landauer’s principle), and this is independent of the physical representation of the bit. The argument is, if you could erase with a lower energy cost, then you could use the Szilard engine to extract work for free from heat (which violates the second law). If we have conditional knowledge about the state of the Szilard engine, then we can extract a different amount of work, namely $latex W = k T H(\rho) \log 2$, where $latex H$ is the binary entropy and $latex \rho$ describes our (potentially quantum) knowledge about the state of the system. 
(Technical note: this is the von Neumann entropy in the asymptotic setting and the max entropy in the single-shot setting.) Consider a computer with some registers $latex R_1$ and $latex R_2$. I would like to erase them. If I erase them in sequence, then the total work required to do this would just sum the individual works using the previous formula. If instead we did some joint operation, we would only pay $latex \propto H(\rho_{12})$, which is in general less than the sum of the individual entropies. That is, in general we have $latex H(\rho_{12}) \le H(\rho_1) + H(\rho_2)$, so it makes sense to take advantage of the joint information. We could use instead a formula in terms of conditional entropies by just changing the formula in the obvious way. It turns out that this is the correct formula: $latex W = kT \log 2 \, H(S|\mathrm{obs})$, where $latex S$ is the system and $latex \mathrm{obs}$ represents the observer. If we take advantage of this conditional information, then we can serially achieve the same work as the optimal joint protocol. This is a manifestation of the so-called chain rule for entropy: $latex H(A|B) + H(B) = H(AB)$. Now let’s discuss thermalization. There is some large system (including an environment). If the total system is in a pure quantum state, how can it thermalize? What does that mean? We can look only at the subsystem, and there is a reduced density operator. Recall Alioscia’s talk for an example of some conditions for when the subsystem thermalizes. However, if two separate subsystems are separately thermal, that does not imply that the joint system is thermal. If we instead condition on the other subsystem, we could indeed reach this conclusion. As before, we cannot simply condition on quantum information, but the conditional entropies are well-defined. We can introduce the decoupling lemma, which guarantees that the joint state is nearly separable. 
It is a sufficient condition, phrased in terms of entropies conditioned on the observer, for decoupling. Namely, decoupling holds if we typically have $latex H(S|\mathrm{obs}) \ge 2k - n$, where the total system has $latex 2^n$ degrees of freedom and the system has $latex 2^k$. This is tight. If we apply this condition to thermalization, then whenever the subsystem is sufficiently small, the system will thermalize.
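The entropy bookkeeping behind the serial erasure argument can be checked on a toy classical example (the joint distribution below is my own illustrative choice, not from the lecture):

```python
import numpy as np

def H(p):
    """Shannon entropy in bits (0 log 0 := 0)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical joint distribution of two correlated bit registers R1, R2.
p12 = np.array([[0.4, 0.1],
                [0.1, 0.4]])
p1, p2 = p12.sum(axis=1), p12.sum(axis=0)   # marginals
H12, H1, H2 = H(p12), H(p1), H(p2)

# Joint erasure costs ~ H(R1 R2), never more than separate erasures:
assert H12 <= H1 + H2 + 1e-12

# Conditional entropy H(R1|R2), computed from the conditional distributions:
H1_given_2 = sum(p2[b] * H(p12[:, b] / p2[b]) for b in range(2))

# Chain rule H(R1|R2) + H(R2) = H(R1 R2): erasing R2 first, then erasing R1
# conditioned on R2, matches the optimal joint protocol.
assert abs((H1_given_2 + H2) - H12) < 1e-12
```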

Tobias Osborne, The continuum limit of a quantum circuit: variational classes for quantum fields, arXiv:1102.5524 and arXiv:1005.1268, joint work with Cirac, Eisert, Haegeman, Verschelde, Verstraete.

Review of quantum fields. There are several ways one might “solve” a QFT: perturbation theory, Monte Carlo, and variational methods. There were problems using the variational method in QFT, namely sensitivity to the UV cutoff and poor reproduction of correlations. Review of MPS. Recall that an MPS can be built from a staircase circuit. Review of the area law result due to Hastings. As Tobias himself has proved, logarithmic-time evolution can be simulated efficiently, too. Also, a review of MERA. We can view it as a quantum circuit which takes ancillas and prepares a state. How can we pass to the continuum for these variational classes? We can use the quantum circuit preparation prescriptions of MPS and MERA and change the time step for each gate to an infinitesimal quantity. That’s the main idea. Some nice features of cMPS: it is possible to analytically compute all the correlation functions. There is a built-in UV cutoff. There is generic clustering of correlations. One can build cMERA by alternating scaling transformations and local interactions for infinitesimal time steps, and taking the limit. There is one main difference between MERA and cMERA: flexibility in the UV cutoff. cMERA allows a smooth cutoff, but regular MERA only offers a lattice cutoff. These classes also obey entropy/area laws, rigorously in the case of cMPS, and there is a heuristic argument for cMERA. Gave an example for a simple bosonic system where the expressions are analytic for cMERA.
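The staircase-circuit picture of MPS can be illustrated in a toy simulation (my own sketch, not code from the talk): a single staircase of 2-qubit gates applied to a product state yields a state whose Schmidt rank across every cut is at most 2, i.e. a bond-dimension-2 MPS.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Haar-random d x d unitary via QR decomposition."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 6
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0  # |00...0>

# Staircase: a 2-qubit gate on (0,1), then (1,2), ..., then (n-2, n-1).
for i in range(n - 1):
    U = haar_unitary(4)
    psi = psi.reshape(2**i, 4, 2**(n - i - 2))
    psi = np.einsum('ba,iaj->ibj', U, psi).reshape(-1)

# Across every cut only one qubit is carried over by the single straddling
# gate, so the Schmidt rank is at most 2: a bond-dimension-2 MPS.
for cut in range(1, n):
    s = np.linalg.svd(psi.reshape(2**cut, 2**(n - cut)), compute_uv=False)
    assert np.sum(s > 1e-10) <= 2
```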

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 2

Day 2 of the conference. By the way, it is pretty difficult to get all this stuff written down correctly the first time through! So, if you see any mistakes please leave a comment so I can fix them.

Quantum Information in Quantum Many-Body Physics

Barbara Terhal, Adiabatic Quantum Computation and Stoquastic Hamiltonians, arXiv:0806.1746 (joint work with Sergey Bravyi) and quant-ph/0606140 (joint with Bravyi, DiVincenzo and Oliveira).

Some quantum Hamiltonians have ground states which are amenable to Monte Carlo techniques because of the absence of the sign problem. We say a Hamiltonian is stoquastic iff, in some product basis, the off-diagonal elements of each local term are real and non-positive. (One can apply local unitary operations to potentially transform a Hamiltonian into a stoquastic one.) Examples of stoquastic Hamiltonians: ferromagnets, the transverse-field Ising model, Jaynes-Cummings, spin-boson and, importantly, Josephson-junction flux qubit Hamiltonians such as those engineered by D-Wave for adiabatic quantum computation (AQC). Given a stoquastic Hamiltonian, we can define $latex G=I-tH$ for some sufficiently small $latex t$, and $latex G$ is a non-negative matrix. By the Perron-Frobenius theorem, the eigenvector with the largest eigenvalue has non-negative entries, so it can be normalized to a probability distribution. Therefore, we can potentially use quantum Monte Carlo (QMC) to sample from it. Theorem 1: Stoquastic frustration-free AQC can be efficiently simulated classically. Recall the matrix $latex G$ from before. We can sample the matrix elements of $latex G$ efficiently because the Hamiltonian is local. Since the Hamiltonian is frustration-free (FF), the ground state is annihilated term-by-term. We would like to set up a Markov chain which allows us to sample from the ground state. We can define a stochastic matrix from $latex G$ and the ground-state amplitudes $latex \alpha_j$, namely $latex P_{j\to i} = (D G D^{-1})_{ij} = \frac{\alpha_i}{\alpha_j} G_{ij}$, where $latex D$ is the diagonal matrix with the ground state amplitudes $latex \alpha_i$ on the diagonal. This is a stochastic matrix. Because the Hamiltonian is FF, we can compute the ratio $latex \frac{\alpha_i}{\alpha_j}$. This doesn’t seem to work in the non-FF case. Now if you iterate and there is a gap, you will converge in polynomial time. In the frustrated case, we don’t know how to compute the ratio anymore. 
An alternative approach: we know the ground state is well-approximated by the Gibbs state at low temperature. The Gibbs operator $latex e^{-\beta H}$ has non-negative entries if the Hamiltonian is stoquastic (even if frustrated). The partition function is then a sum over non-negative weights. We can compute each of these weights efficiently, so now we want to estimate the sum with QMC. “World-line QMC.” You can use some kind of Metropolis rule for accepting and rejecting moves.
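A toy numerical sketch of the Perron-Frobenius construction above (for a small transverse-field Ising chain; here I diagonalize exactly to exhibit the stochastic matrix, whereas the point of the frustration-free case is that the amplitude ratios can be computed without diagonalization):

```python
import numpy as np

# Toy stoquastic Hamiltonian: a 3-qubit transverse-field Ising chain.
# In the computational basis the off-diagonal entries (from the X terms)
# are all -1 <= 0, so H is stoquastic.
n = 3
X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0]); I2 = np.eye(2)

def kron_at(op, site):
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = -sum(kron_at(X, i) for i in range(n))
H -= sum(kron_at(Z, i) @ kron_at(Z, i + 1) for i in range(n - 1))

t = 0.1
G = np.eye(2**n) - t * H
assert np.all(G >= -1e-12)      # entrywise non-negative: Perron-Frobenius applies

evals, evecs = np.linalg.eigh(G)
lam, alpha = evals[-1], np.abs(evecs[:, -1])   # Perron vector, entrywise positive

# Stochastic matrix P_{j -> i} = alpha_i G_{ij} / (lam alpha_j), normalized
# by the top eigenvalue lam so that each column sums to 1.
P = (alpha[:, None] * G / alpha[None, :]) / lam
assert np.allclose(P.sum(axis=0), 1.0)

# Its stationary distribution is the Born distribution alpha_i^2, so the
# Markov chain samples from the ground state.
pi = alpha**2
assert np.allclose(P @ pi, pi)
```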

Spiros Michalakis, Stability of Frustration-free Hamiltonians, arXiv:1109.1588. Joint work with J. Pytel.

The challenge is to find a minimal set of assumptions under which gapped Hamiltonians have a stable spectral gap against weak local perturbations. Not every gapped system is stable. Consider a simple Ising model where the transverse field is a perturbation. This perturbation splits the degeneracy of the ground state. Another example is an Ising model with a defect at the origin. The lesson seems to be that local distinguishability leads to instability of the ground state degeneracy and/or the gap. Now define FF Hamiltonians (FFH). Every FFH on a lattice $latex \Lambda$ is the extension of another FFH on some smaller ball $latex B \subset \Lambda$. The ground state projectors satisfy $latex P_B P_0 = P_0$, where $latex P_0$ is the global ground state projector. Now define topological quantum order (TQO). Roughly, no sufficiently local operator should be able to distinguish two orthogonal ground states. The toric code is an example of a model with TQO. But TQO is not enough for stability, as a simple counterexample shows (the “Ising toric code with defect”; it’s simpler than it sounds). Let’s define local TQO (LTQO), which says that the local distinguishability between various ground states, as measured by the operator norm, decays rapidly. This LTQO condition implies an area law (with a possible logarithmic correction, which is what we expect since we haven’t said anything about a spectral gap at this point). Next, we consider another condition called the local gap condition (LG). We want the local Hamiltonians (restricted to some ball $latex B$) to have a gap which decays at most as an inverse polynomial in the size of $latex B$. Open problem: is LG always satisfied if the global Hamiltonian is a sum of local projections with FF and a spectral gap? The main theorem: If $latex H$ is an FFH satisfying LTQO and LG, then under sufficiently weak quasi-local perturbations the gap is stable and the ground state degeneracy doesn’t split too much. 
Gave a very brief overview of the proof. Some open problems: how can we prove spectral gaps for 2D FFH? What symmetries of the Hamiltonian imply LTQO? Can we prove LG for all FFH that are sums of local projectors?
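The first instability example from the talk is easy to check numerically (a minimal sketch with my own choice of chain length and field strengths): the degenerate ground space of a small classical Ising chain is split by a weak transverse field.

```python
import numpy as np
from functools import reduce

Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]]); I2 = np.eye(2)

def op(single, site, n):
    """Single-site operator embedded at `site` in an n-qubit chain."""
    return reduce(np.kron, [single if k == site else I2 for k in range(n)])

n = 6
H0 = -sum(op(Z, i, n) @ op(Z, i + 1, n) for i in range(n - 1))

# Unperturbed Ising chain: exactly twofold degenerate ground space
# (all-up and all-down), and the two ground states are locally
# distinguishable by a single-site Z measurement.
e0 = np.sort(np.linalg.eigvalsh(H0))
assert abs(e0[1] - e0[0]) < 1e-10

# A weak transverse field lifts the degeneracy (at order eps^n for this
# chain), illustrating the instability of the ground state degeneracy.
splittings = []
for eps in (0.1, 0.2):
    H = H0 - eps * sum(op(X, i, n) for i in range(n))
    e = np.sort(np.linalg.eigvalsh(H))
    splittings.append(e[1] - e[0])
assert all(s > 1e-12 for s in splittings)
assert splittings[1] > splittings[0]    # larger field, larger splitting
```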

Norbert Schuch, Complexity of commuting Hamiltonians for qubits on a square lattice, arXiv:1105.2843.

Commuting Hamiltonians can have highly entangled ground states (e.g. the toric code). They clearly have a constant spectral gap, and we only need to estimate the ground state energy to constant accuracy. Is the complexity of estimating the ground state energy easier than the full quantum problem? Main result: for qubits on a 2D square lattice, the problem of determining whether the Hamiltonian has a frustration-free ground state is in NP. First we need a structure theorem for commuting Hamiltonians due to Bravyi and Vyalyi (2003). This decomposition was used to show that the 2-body commuting Hamiltonian problem is in NP. The decomposition does not generally extend to $latex k$-body for $latex k \ge 3$ or to qudits. However, if the connectivity is “sparse enough” then the decomposition works. Idea number one: split the lattice into “white” and “black” squares and use the fact that each layer individually can take advantage of the BV decomposition. But we don’t know if these two decompositions are compatible, so we need a new idea. We can tell if the FF ground state exists if we can find states in each layer that have non-trivial overlap… but how can we compute the overlap? We need to take advantage of the structure. The point is that for qubits the overlap can be efficiently computed. When the spins are qubits, there are only two possibilities for the BV decomposition. One possibility is trivial, whereas the second possibility doesn’t allow any branching structures to form, so the overlap can be computed efficiently. This proof does not provide a constructive procedure for finding the ground state.

Maarten Van den Nest, A monomial matrix formalism to describe quantum many-body states, arXiv:1108.0531.

The motivation is to generalize the Pauli stabilizer formalism in a way which still retains some of the nice properties and captures a large class of states. Some of the nice properties include: efficient description of quantum states via the stabilizers, and understanding of properties of the state via manipulation of the stabilizers. We first define a monomial unitary matrix, which is a permutation matrix times a diagonal matrix of phases. An M-state is one which is stabilized by (i.e., is a +1 eigenstate of) a set of monomial unitaries. We typically want to restrict to special cases, such as $latex k$-local, etc. This class of states encompasses e.g. stabilizer states for qudits, all quantum double ground states, the AKLT model, W-states, Dicke states, coherent probabilistic computations, coset states of Abelian groups, and so on. We define the monomial stabilizer group $latex \mathcal{G}$ to be the group generated by the unitaries. Assume this is finite. There is another group, called the permutation group $latex \mathcal{P}$, which comes from “forgetting” the phase structure and keeping only the permutation piece. We define the orbits to be the orbits of the computational basis states under the action of the operators in $latex \mathcal{P}$. Some simple theorems: Thm 1, all amplitudes of an M-state are zero outside the orbit. Thm 2, all nonzero amplitudes have equal modulus. If you have an entire M-space, then you can find a basis for the space where each basis state is a uniform-amplitude superposition over a single orbit. The orbits are disjoint, so the dimension of the space is bounded by the total number of orbits. Computing some properties, such as sampling the state in the computational basis, is NP-hard in general. However, there are many examples that allow an efficient classical simulation.
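A two-qubit toy example of Theorems 1 and 2 (my own illustration): the GHZ/Bell state viewed as an M-state of the monomial stabilizers XX and ZZ.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
XX = np.kron(X, X)        # monomial unitary: pure permutation, no phases
ZZ = np.kron(Z, Z)        # monomial unitary: identity permutation, diagonal phases

# The 2-qubit GHZ/Bell state is a joint +1 eigenstate (an M-state).
ghz = np.array([1, 0, 0, 1]) / np.sqrt(2)
assert np.allclose(XX @ ghz, ghz) and np.allclose(ZZ @ ghz, ghz)

# "Forget" the phases: XX permutes basis states 00<->11 and 01<->10,
# while ZZ acts as the identity permutation.
perm_xx = {0: 3, 1: 2, 2: 1, 3: 0}

# Orbit of |00> under the permutation group generated by perm_xx:
orbit = {0, perm_xx[0]}            # {0, 3}

# Thm 1: amplitudes vanish outside the orbit;
# Thm 2: the nonzero amplitudes all have equal modulus.
outside = [k for k in range(4) if k not in orbit]
assert np.allclose(ghz[outside], 0)
assert np.allclose(np.abs(ghz[sorted(orbit)]), 1 / np.sqrt(2))
```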

Hector Bombin, Structure of 2D topological stabilizer codes, arXiv:1107.2707, joint work with D. Poulin and G. Duclos-Cianci.

Topological order can be thought of as equivalence classes of many-body systems under the action of constant-depth local unitary quantum circuits. We say that a state has short range entanglement if such a circuit can map the state to a product state. Other states are said to have long range entanglement. (Wen, Gu, Chen 2009) We will consider topological stabilizer codes (TSC). These are a subset of the stabilizer code Hamiltonians, in which the local terms are stabilizer code generators. The goal of this talk is two-fold: show rigorously that local equivalence “works” as a way of classifying gapped phases, and find the general structure of 2D TSC including subsystem codes. This allows us to extend techniques which are well-known for specific models (such as RG error correction for the toric code) to other models. Let’s recall what defines a theory of anyons: topological charges (particle labels), fusion rules and braiding rules. In 2D, string operators play an important role in these models. In fact, we can recover the data specifying the anyon theory from the action of the string operators. In this talk, we are only considering qubit Hamiltonians. We only consider Hamiltonians on an infinite lattice which are translationally invariant. Then TSC are ones where the centralizer of the stabilizer (in the group of Paulis with finite support) is again the stabilizer. TSC with the same anyon model are equivalent. Since we are going to show that everything is equivalent to some number of copies of the toric code, we can get a complete invariant by considering only the topological entanglement entropy. A sketch of the proof:

  1. Topological stabilizer groups admit local, translationally invariant independent generators (only possible in 2D)
  2. there are only a finite number of global constraints
  3. the number of global constraints equals the number of charges in the anyon theory
  4. the charges are topological, so they have string operators, which means we can extract an anyon model
  5. we can rule out chirality because the Hamiltonian is commuting (there can be no energy transport along the edge, as can be seen by considering evolution in the Heisenberg picture)
  6. build a framework for string segments as in multiple copies of the toric code
  7. other stabilizers (non-plaquette) have no charge
  8. map string segments to string segments and uncharged stabilizers to single qubit stabilizers
  9. ???

(Hector, if you’re reading this, fill in #9 in the comments. You went waaaay too fast!) Now let’s move on to subsystem codes. There are some complications since there is now also a gauge group as well as a stabilizer group. Gauge charges map naturally to stabilizer charges. Although they are not generally isomorphic, these two groups have the same order. In fact, these two charge groups are naturally dual. When all is said and done, every possible anyon model on qubits is a combination of: the toric code (2 bosons, 1 fermion, semionic statistics); TSCC (3 fermions, semionic interactions, chiral); subsystem toric code (1 boson); honeycomb subsystem code (1 fermion).

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 1

For the rest of the week I’m at a very interesting conference in Montreal on quantum information and quantum many-body physics. I’ll be live blogging the event. Enjoy!

Quantum Information in Quantum Many-Body Physics

Alioscia Hamma, Entanglement in Physical States, arXiv:1109.4391. Joint work with P. Zanardi and S. Santra.

Some motivation: studying the foundations of statistical mechanics using quantum information. Want to investigate the role of entanglement in stat mech. Goal of this talk is to revise some of the recent work by Winter et al. which studies stat mech using Haar-random states, and instead consider some subset of just the “physical” states, e.g. states which are subject to a local Hamiltonian evolution. Thermalization in a closed quantum system is a vexing philosophical problem. Let’s focus on entanglement; if we look locally at the system we see a mixed state even if the global state is pure. Can we explain thermodynamics via this entanglement entropy? For a Haar-random pure state, the reduced density operator is almost maximally mixed. Now let’s define a global Hamiltonian with a weak coupling to a bath. Then the reduced state is close to the Gibbs state. However, just by a simple counting argument (Poulin, Quarry, Somma, Verstraete 2011), most physical states (i.e. ones subject to evolution with respect to a local Hamiltonian) are far from typical Haar-random states. What Alioscia et al. want to do is start from completely separable pure states, evolve with respect to a local Hamiltonian for a polynomial amount of time, and then study the entanglement in such a system. We start with a probability measure [latex]p[/latex] on the set of subsets of qudits. Now sample a subset and evolve for an infinitesimal amount of time. Now take a new iid sample and evolve again. Look at the reduced density operator and compute the purity, [latex]\mathrm{Tr}(\rho^2)[/latex]. Now let’s compute the mean and variance of the purity. To do this, use a superoperator formalism. Use a so-called random edge model. If you put the system on a lattice, you find that the average purity decays with an area law scaling. Next, consider a linear chain of length L cut into two pieces A and B. 
Instead of choosing edges from a random edge model, use a random local unitary on each site chosen in a random order (it turns out this order doesn’t matter much). Now repeatedly apply these random local unitaries to each site on the chain. The purity decays exponentially in the number of time steps. The average Rényi 2-entropy is nearly [latex]k \log d - \log 2[/latex], where [latex]k[/latex] is the number of time steps and [latex]d[/latex] is the local site dimension. If we look at the [latex]k\to\infty[/latex] limit we get the maximally mixed state. They were not able to compute the gap of this superoperator yet, so we don’t know the rate of convergence. When the number of time steps is constant, we have an area law. Working on 2D and 3D lattices now. An open question is how to take an energy constraint into account so that one might be able to recover a Gibbs state. Question period: David Poulin suggests keeping the unitary according to a Metropolis-Hastings rule.
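The qualitative purity decay is easy to reproduce in a toy simulation (my own sketch; for simplicity I use ordered sweeps of random two-site gates rather than the talk’s random-order single-site protocol, which shows the same qualitative behavior):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    """Haar-random d x d unitary via QR decomposition."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 8                                   # chain of n qubits, cut into halves
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0                            # completely separable initial state

def purity_left_half(psi):
    """Tr(rho_A^2) for the left half A of the chain."""
    M = psi.reshape(2**(n // 2), 2**(n // 2))
    rho_A = M @ M.conj().T
    return float(np.real(np.trace(rho_A @ rho_A)))

purities = [purity_left_half(psi)]
for sweep in range(10):                 # one "time step" = one sweep of gates
    for i in range(n - 1):
        U = haar_unitary(4)             # random 2-qubit gate on edge (i, i+1)
        psi = psi.reshape(2**i, 4, 2**(n - i - 2))
        psi = np.einsum('ba,iaj->ibj', U, psi).reshape(-1)
    purities.append(purity_left_half(psi))

# The purity starts at 1 and decays toward the maximally mixed floor
# 2^{-n/2}, i.e. the Renyi 2-entropy grows with the number of time steps.
assert purities[0] == 1.0
assert purities[-1] < 0.5
assert all(p >= 2.0**(-(n // 2)) - 1e-9 for p in purities)
```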

Sergey Bravyi, Memory Time of 3D spin Hamiltonians with topological order, Phys. Rev. Lett. 107 150504 (2011); arXiv:1105.4159, joint work with Jeongwan Haah.

Want to store quantum information in a quantum many-body system for a long time. In a self-correcting memory, we want the natural physics of the system to do error correction for us. An example of a classical self-correcting memory: the 2D ferromagnetic Ising model. There is a macroscopic energy barrier which prevents transitions between the ground states. It’s no good as a quantum memory because different ground states can be distinguished locally. (We need topological order.) Begin with an encoding of the information, i.e. an initialization of the system. To model the evolution, we use Metropolis dynamics. If a spin flip would decrease the energy, then flip it; otherwise, flip it with probability [latex]\exp(-\beta \Delta E)[/latex]. If we are below the critical temperature, then the lifetime of the information is long; otherwise it is exponentially short. Now go to the quantum case. We will consider 2D and 3D quantum codes. We consider only those with Hamiltonians where the local operators are supported on a unit cell (locality). All the stabilizers must commute and must be tensor products of Pauli operators. The Hamiltonian is frustration-free. The initial state of the quantum memory is some ground state. Now we consider dynamics. Restrict to Markovian dynamics, i.e. a master equation in Lindblad form. We will restrict the jump operators to be local with bounded operator norm, and the total number of jump operators should be only of order [latex]n[/latex] (the number of spins). We require that the fixed point of the master equation is the Gibbs state, and also detailed balance. Define the Liouville inner product [latex]\langle X , Y \rangle_\beta = \mathrm{Tr}( \exp(-\beta H) X^\dagger Y )[/latex]. Detailed balance amounts to [latex]\langle X , \mathcal{L}(Y) \rangle_\beta = \langle \mathcal{L}(X) , Y \rangle_\beta[/latex]. As for decoding, we measure the syndrome and correct the errors. We measure the eigenvalue of each of the stabilizer operators. 
Whenever we record a -1 eigenvalue, we call that a defect. The decoder’s goal is to annihilate the defects in the way which is most likely to return the memory to its original state. Let’s ignore the computational complexity of the decoding algorithm for the moment. Then a good decoder is a minimum energy decoder, a la Dennis, Kitaev, Landahl, Preskill. (Why not consider a free-energy minimizer instead?) Rather than use this decoder, we use the RG decoder of Harrington (PhD thesis) and Duclos-Cianci and Poulin (2009). To do this, identify connected clusters of errors and find the minimum enclosing box which encloses the errors. Then try to erase the cluster by finding the minimum energy error within the boxes. Some of the boxes will be corrected, some will not. Now coarse-grain by a factor of two and try again. Stop when there are no defects left. Now apply the product of all the recorded erasing operators. If it returns the system to the ground state, declare success. Declare failure if the box size increases to the linear size of the lattice and there are still some defects left. Now let’s define the energy barrier so that a high energy barrier gives high storage fidelity. A stabilizer Hamiltonian has energy barrier [latex]B[/latex] iff any error path of local operators mapping a ground state to an orthogonal ground state has energy cost at least [latex]B[/latex]. Lemma 1: the trace norm distance between the output of the minimum energy decoder and the original ground state is upper bounded by [latex]\lVert \Phi_{\mathrm{ec}}(\rho(t)) - \rho(0)\rVert \le O(tN) \sum_{m\ge B} {N \choose m} \exp(-\beta m)\,.[/latex] How large does the energy barrier need to be to get self-correcting properties? Clearly a constant [latex]B[/latex] is insufficient. What about [latex]B \sim \log N[/latex]? Plug this into the previous bound and try to pick the system size (for a fixed temperature) that minimizes the RHS. 
Choose the system size to scale like [latex]N_\beta \sim \exp(\beta/2)[/latex] and you get that the storage infidelity decreases like [latex]\exp(-\beta^2)[/latex]. This is a “marginal” self-correcting memory. There is a size of the system which is “too big”, beyond which you lose the self-correction properties, but for the right size you get good properties. Contrast this with systems that have logical string-like operators: applying a string-like operator to the vacuum creates a pair of defects at the end points of a string. Then the energy barrier is [latex]O(1)[/latex] and the system can’t be self-correcting. The memory time is [latex]O(\exp(\beta))[/latex]. All 2D stabilizer Hamiltonians have these string-like logical operators. In 3D, all previous attempts had string-like excitations as well. Recent results by Haah (arXiv:1101.1962) provably have no string-like logical operators in 3D. Suppose that we consider a stabilizer Hamiltonian that has topological quantum order (TQO) but does not have logical string-like operators. What can we say about its energy barrier? What can we say about its memory time? We need to define “TQO” and “string-like”. For simplicity, consider only translationally invariant Hamiltonians. Definition: A finite cluster of defects C is called neutral iff C can be created from the vacuum by acting on a finite number of qubits. Otherwise C is called charged. For example, a single semi-infinite string would be a charged cluster. Definition: TQO iff 1) orthogonal ground states are indistinguishable on scales [latex]\ll L[/latex]; 2) any neutral cluster of defects C can be created from the vacuum by acting on qubits inside the smallest enclosing box [latex]b(C)[/latex] and its constant neighborhood. Now define string-like. Consider a Pauli operator P. Suppose it creates defects only inside two boxes A1 and A2. Then a logical string segment P is called trivial iff both anchors are topologically neutral. 
Trivial string segments can be localized by multiplying them with stabilizers. These aren’t actually strings. The aspect ratio [latex]\alpha[/latex] is the ratio of the distance between the anchors to the size of the anchor regions. The no-strings rule: there exists a constant [latex]\alpha[/latex] such that any logical string segment with aspect ratio greater than [latex]\alpha[/latex] is trivial. Main result, theorem 1: any sequence of local Pauli errors mapping a ground state to an orthogonal ground state must cross a logarithmic energy barrier. The constant implicit in front of the logarithmic scaling depends only on the aspect ratio (the no-strings constant) and the local site dimension. (The original Haah code has [latex]\alpha = 15[/latex].) This bound is tight and gives the [latex]\exp(\beta^2)[/latex] scaling for the minimum energy decoder. The next result gives a similar result for the more practical RG decoder. Sketch of the proof of theorem 1. The no-strings rule implies that local errors can move charged clusters of defects only a little bit. The dynamics of neutral clusters is irrelevant as long as such clusters remain isolated. The no-strings rule is scale invariant, so use an RG approach to analyze the energy barrier. Define a notion of “sparse” and “dense” syndromes. Lemma: dense syndromes are expensive in terms of energy cost. Any particular error path will have some sparse and some dense moments as you follow the path. Now do some RG flow. Keep only the dense points in the path. This sparsifies some of the dense ones at the next level of the RG hierarchy. Repeat. Use the no-strings rule to help localize errors. In conclusion: all stabilizer Hamiltonians in 2D local geometries have string-like logical operators. The energy barrier does not grow with the lattice size. Some 3D Hamiltonians have no string-like logical operators. Their energy barrier grows logarithmically with the size of the lattice. 
For small temperature [latex]T[/latex], the memory time grows exponentially with [latex]1/T^2[/latex]. The optimal lattice size grows exponentially with [latex]1/T[/latex].
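The optimization behind these last scalings can be reconstructed roughly as follows (my own back-of-the-envelope sketch, keeping only the dominant term of the Lemma 1 bound and dropping constants):

```latex
% With energy barrier B = c \log N, and for N \le e^{\beta}, the summand
% \binom{N}{m} e^{-\beta m} is largest at m = B, so Lemma 1 gives roughly
\lVert \Phi_{\mathrm{ec}}(\rho(t)) - \rho(0) \rVert
  \lesssim t N \binom{N}{B}\, e^{-\beta B}
  \le t N\, e^{B \ln N - \beta B}
  = t N\, e^{c \ln N\,(\ln N - \beta)} .
% The exponent c \ln N (\ln N - \beta) is minimized at \ln N = \beta/2,
% i.e. N_\beta \sim e^{\beta/2}, where it equals -c\beta^2/4.
% Hence the memory time scales as e^{\Omega(\beta^2)}, i.e. exponentially
% in 1/T^2, with the optimal lattice size exponential in 1/T.
```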

Tzu-Chieh Wei, Measurement-based Quantum Computation with Thermal States and Always-on Interactions, Phys. Rev. Lett. 107 060501 (2011); arXiv:1102.5153. Joint work with Li, Browne, Kwek & Raussendorf.

Can we do MBQC with a gapped Hamiltonian where [latex]\Delta\beta \ll 1[/latex], where [latex]\Delta[/latex] is the energy gap? Consider the toy model of the trivalent AKLT state on a honeycomb lattice. Start with a bunch of spin-1/2 singlets on the edges and project onto the maximal spin (spin-3/2) subspace at each vertex. If we consider the toy model of a single unit cell, then we see some features: the model is gapped (of course) and the Hamiltonian evolution is periodic (of course). So, if we perform operations at regular intervals, perhaps we can get MBQC to work if [latex]\beta \Delta \ll 1[/latex]. We need to reduce the dimensionality of the central spin from 4 states to 2 states. We can use projection filtering to get this down to a GHZ state along a randomly chosen quantization axis (one of [latex]x,y,z[/latex]). Distill a cluster state by measuring the POVM on the center particles. Then measure the bond particles. We can extend to 3D as well. This deterministically distills a 3D cluster state. As for fault tolerance, you can use the Raussendorf-Harrington-Goyal scheme.

Anne E. B. Nielsen, Infinite-dimensional matrix product states, arXiv:1109.5470, joint work with Cirac and Sierra.

Recall the definition of an iMPS: we take the [latex]D\to\infty[/latex] limit of a standard matrix product state. Let’s consider iMPS constructed from conformal fields. Motivation: we want to identify families of states useful for investigating particular problems in quantum many-body systems. Why the [latex]D\to\infty[/latex] limit, and why conformal fields? We want to describe critical systems. The entanglement entropy is not limited by the area law. Power-law decay of spin-spin correlation functions. Long-range interactions are possible. Mathematical tools from CFT are useful. Sometimes it is possible to derive analytical results. Let’s restrict to a special model, the [latex]\mathrm{SU}(2)_k[/latex] WZW model. There are some chiral primary fields [latex]\phi_{j,m}(z)[/latex], where the spin values range over [latex]j \in \{0,1/2,\ldots,k/2\}[/latex] and [latex]m[/latex] is the z-component. There is a closed form of the wavefunction for [latex]k=1, j=1/2[/latex] in the case where the [latex]z_i[/latex] are uniform on the unit circle in the complex plane. One can compute the Rényi 2-entropy and the two-point spin correlation function. We can also derive a parent Hamiltonian for this model. We use two properties: the null field and the Ward identity. This Hamiltonian is non-local but 2-body. We can recover the Haldane-Shastry model in the uniform case. They computed the two- and four-point correlation functions.

David Sénéchal, The variational cluster method.

Ultimate goal: solve the Hubbard model, i.e. the simplest model of interacting electrons that you can write down. In particular, we are interested in the Green’s functions (single-particle properties) and the susceptibilities (two-particle properties). We base the cluster method on a tiling of the lattice (i.e. the superlattice). We solve for local information first in each block and then try to restore the connections between the blocks. We can rewrite this using a Fourier transform and parameterize in terms of two wave vectors: a discrete cluster wave vector (telling you which cluster you’re in) and a reduced wave vector (continuous). Now we decompose the Hamiltonian into a cluster term plus a perturbation which contains the inter-cluster hopping terms. Then we treat this at lowest order in perturbation theory. The approximate Green’s function at this level of approximation is very simple: invert the cluster Green’s function, subtract the perturbation, and invert the result to get the total Green’s function. The lattice self-energy is approximately the cluster self-energy. Note that this method breaks translation invariance, but this can be “restored” by completing the Fourier transform (i.e. Fourier transforming over the lattice sites). Then we can “periodize” the Green’s function. (This seems to be the right way to do it, rather than periodizing the self-energy, at least at half-filling where we have an exact solution in 1D.) Cluster perturbation theory (CPT) is exact when [latex]U=0[/latex] and when [latex]t=0[/latex], and also gives exact short-range correlations. It allows a continuum of wavevectors. However, it doesn’t allow long-range order or broken symmetry. The approximation is controlled by the size of the cluster, and finite-size effects are important. There is no self-consistency condition (unlike dynamical mean-field theory). One idea to capture a broken symmetry in CPT is to add a Weiss field.
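In symbols, the CPT prescription described above is conventionally written (standard CPT notation, not from the slides) as

$latex G^{-1}(\tilde{k},\omega) = G_c^{-1}(\omega) - V(\tilde{k}),$

where [latex]G_c[/latex] is the exact Green’s function of the cluster, [latex]V(\tilde{k})[/latex] contains the inter-cluster hopping, and [latex]\tilde{k}[/latex] is the reduced wave vector. Periodization then gives $latex G_{\mathrm{per}}(k,\omega) = \frac{1}{L}\sum_{a,b} e^{-i k\cdot(r_a - r_b)}\, G_{ab}(\tilde{k},\omega)$, summing over the [latex]L[/latex] sites of the cluster.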
To set the relative weight of this term you would need some kind of principle, such as energy minimization. Instead, we use an idea due to Potthoff (M. Potthoff, Eur. Phys. J. B 32 429 (2003)). We define a functional of the Green’s function with the property that varying with respect to G gives the self-energy, and add some terms so that we get the Dyson equation. The value at the stationary point is the grand potential. There are now several approximation schemes: we could approximate the Euler equation (the equation for the stationary solution); we could approximate the functional (Hartree-Fock, etc.); we could restrict the variational space but keep the functional exact. We focus on the third method (following Potthoff). Potthoff suggested using the self-energy instead of the Green’s function and used the Legendre transform. This functional is universal in the sense that its functional form only depends on the interaction part. We can introduce a reference system which differs from the original Hamiltonian by one-body terms only. Suppose that we can solve this new Hamiltonian exactly. Then at the physical self-energy we can compute the value of the grand potential. Thus, we can find the dependence of the grand potential on the Green’s function. Showed some examples with Néel antiferromagnetism, superconductivity and a dynamical example studying the Mott transition.
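For reference, Potthoff’s self-energy functional (as I understand it, in standard notation) is

$latex \Omega_t[\Sigma] = F[\Sigma] + \mathrm{Tr}\ln\bigl(-(G_0^{-1} - \Sigma)^{-1}\bigr),$

where [latex]F[\Sigma][/latex] is the Legendre transform of the Luttinger-Ward functional and depends only on the interaction. Stationarity [latex]\delta\Omega_t/\delta\Sigma = 0[/latex] recovers the Dyson equation [latex]G = (G_0^{-1} - \Sigma)^{-1}[/latex], and the stationary value is the grand potential.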

Guillaume Duclos-Cianci, Local equivalence of topological order: Kitaev’s code and color codes, arXiv:1103.4606, joint work with H. Bombin and D. Poulin.

Focus on the toric code to be concrete. The main result is the following theorem: all 2D topological stabilizer codes are equivalent, meaning there exists a local unitary mapping to a certain number of copies of Kitaev’s toric code. First a review of the toric code, the fact that its excitations are individually bosonic but have mutual semionic statistics, and the notion of topological charge (i.e. equivalence classes of excitation configurations). Now consider the decoding problem: given a configuration of defects, how can we decide which equivalence class we started in? Now define color codes. Gave a lot of details of the mapping taking the color code to two copies of the toric code, but I would really need some pictures to give an adequate description. Showed some results on the threshold for decoding using the mapping.
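As a toy illustration of the defect/equivalence-class picture (my own sketch, not code from the talk; the lattice indexing conventions are mine), here is a check that an open string of Z errors on the torus creates vertex defects only at its endpoints, while a homologically trivial loop creates none:

```python
import numpy as np

# Toy toric code on an L x L torus: qubits live on edges; the vertex
# stabilizer at (x, y) is a product of X over its four incident edges,
# so its syndrome is the parity of Z errors on those edges.

L = 4  # linear size of the torus

def syndrome(err):
    """Vertex syndromes for a pattern of Z errors.

    err[0, x, y]: horizontal edge leaving vertex (x, y) in +x
    err[1, x, y]: vertical edge leaving vertex (x, y) in +y
    """
    s = np.zeros((L, L), dtype=int)
    for x in range(L):
        for y in range(L):
            s[x, y] = (err[0, x, y] + err[0, (x - 1) % L, y]
                       + err[1, x, y] + err[1, x, (y - 1) % L]) % 2
    return s

# An open string of two Z errors along +x from vertex (0, 0) to (2, 0):
err = np.zeros((2, L, L), dtype=int)
err[0, 0, 0] = err[0, 1, 0] = 1
s = syndrome(err)  # defects appear only at the two endpoints

# A homologically trivial closed loop (one plaquette boundary) is invisible:
loop = np.zeros((2, L, L), dtype=int)
loop[0, 0, 0] = loop[1, 0, 0] = loop[0, 0, 1] = loop[1, 1, 0] = 1
```

Decoding then amounts to pairing up defects with a correction such that error plus correction forms a trivial loop; two corrections differing by a non-contractible loop land in different equivalence classes.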

Philippe Corboz, Recent progress in the simulation of strongly correlated systems in two dimensions with tensor network algorithms, several papers, joint work with Bauer, Troyer, Mila, Vidal, White, Läuchli & Penc.

Want to use tensor networks to study strongly correlated systems. Typically, we use quantum Monte Carlo (QMC), but this fails for fermionic or frustrated systems because of the sign problem, so it is important to look for alternatives. To model fermions with a tensor network, we need to take the exchange statistics into account, and this can be done. Consider the t-J model: does the tensor network approach reproduce the “striped” states observed in some cuprates? DMRG (on wide ladders) says YES, while variational and fixed-node Monte Carlo say NO. What does iPEPS say? It says stripes! Another example: the [latex]\mathrm{SU}(N)[/latex] Heisenberg models on a 2D square lattice. For $latex N=2$ we know there is Néel order. The iPEPS result reproduces known results from QMC in this case; a good start. There are problems fitting as you increase the bond dimension $latex D$, and it is difficult to extrapolate. For $latex N=3$ and $latex N=4$ the sign problem means you can’t use QMC. Focus on the case $latex N=4$. Here they find a new ground state which has considerably lower energy than previous variational results. The new ground state exhibits dimerization, and the color variation across each dimer exhibits Néel order (“dimer-Néel order”). Here the colors just refer to the 4 different degrees of freedom. This seems to be the ground state, as opposed to the plaquette state predicted by linear flavor-wave theory.
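The remark about exchange statistics can be illustrated with the standard “fermionic swap” trick (a minimal sketch of my own, not code from the talk): whenever two lines cross in the tensor network, one inserts a gate that acts as the ordinary swap, up to a minus sign when both indices carry odd fermion parity.

```python
import numpy as np

# Minimal sketch (my illustration): the fermionic swap gate used to handle
# exchange statistics in fermionic tensor networks. Crossing two lines picks
# up a sign when both carry odd fermion parity.

parity = np.array([0, 1])  # single mode: |0> is even, |1> is odd
d = 2

# swap[out1, out2, in1, in2]: exchange the two indices with a graded sign
swap = np.zeros((d, d, d, d))
for i in range(d):
    for j in range(d):
        swap[j, i, i, j] = (-1) ** (parity[i] * parity[j])

# Swapping twice returns the identity, since the sign squares away.
twice = np.einsum('abij,ijcd->abcd', swap, swap)
ident = np.einsum('ac,bd->abcd', np.eye(d), np.eye(d))
```

The point is that all sign bookkeeping becomes local: each crossing in the network diagram gets one such gate, and no global Jordan-Wigner string is needed.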