Quantum Information in Quantum Many-Body Physics — Live Blogging Day 1

For the rest of the week I’m at a very interesting conference in Montreal on quantum information and quantum many-body physics. I’ll be live blogging the event. Enjoy!

Quantum Information in Quantum Many-Body Physics

Alioscia Hamma, Entanglement in Physical States, arXiv:1109.4391. Joint work with P. Zanardi and S. Santra.

Some motivation: studying the foundations of statistical mechanics using quantum information. Want to investigate the role of entanglement in stat mech. The goal of this talk is to revisit some of the recent work by Winter et al., which studies stat mech using Haar-random states, and instead consider a subset of just the “physical” states, e.g. states which are subject to a local Hamiltonian evolution. Thermalization in a closed quantum system is a vexing philosophical problem. Let’s focus on entanglement: if we look locally at the system we see a mixed state even if the global state is pure. Can we explain thermodynamics via this entanglement entropy? For a Haar-random pure state, the reduced density operator is almost maximally mixed. Now let’s define a global Hamiltonian with a weak coupling to a bath. Then the reduced state is close to the Gibbs state. However, by a simple counting argument (Poulin, Qarry, Somma, Verstraete 2011), most physical states (i.e. ones subject to evolution with respect to a local Hamiltonian) are far from typical Haar-random states. What Alioscia et al. want to do is start from completely separable pure states, evolve with respect to a local Hamiltonian for a polynomial amount of time, and then study the entanglement in such a system. We start with a probability measure [latex]p[/latex] on the set of subsets of qudits. Sample a subset and evolve for an infinitesimal amount of time; then take a new i.i.d. sample and evolve again. Look at the reduced density operator and compute the purity, [latex]\mathrm{Tr}(\rho^2)[/latex]. Now let’s compute the mean and variance of the purity. To do this, use a superoperator formalism and a so-called random edge model. If you put the system on a lattice, you find that the average purity decays with an area law scaling. Next, consider a linear chain: a chain of length L cut into two pieces A and B.
Instead of choosing edges from a random edge model, use a random local unitary on each site, applied in a random order (it turns out this order doesn’t matter much). Now repeatedly apply these random local unitaries to each site on the chain. The purity decays exponentially in the number of time steps. The average Rényi 2-entropy is nearly [latex]k \log d - \log 2[/latex], where [latex]k[/latex] is the number of time steps and [latex]d[/latex] is the local site dimension. In the [latex]k\to\infty[/latex] limit we get the maximally mixed state. They were not able to compute the gap of this superoperator yet, so we don’t know the rate of convergence. When the number of time steps is constant, we have an area law. They are working on 2D and 3D lattices now. An open question is how to take an energy constraint into account so that one might be able to recover a Gibbs state. Question period: David Poulin suggests keeping the unitary according to a Metropolis-Hastings rule.
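The qualitative behavior described above is easy to reproduce numerically. The sketch below is a simplification of my own, not the talk’s exact model: it uses Haar-random two-qubit gates applied to nearest-neighbour bonds in a brickwork pattern (rather than sampling subsets from a measure [latex]p[/latex]), starting from a completely separable state, and tracks the purity of the left half of the chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """Haar-random n x n unitary from the QR decomposition of a Ginibre matrix."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix the phases so the distribution is Haar

def apply_two_site(psi, u, i, L):
    """Apply a 4x4 unitary u to neighbouring qubits (i, i+1) of an L-qubit state."""
    psi = psi.reshape(2 ** i, 4, 2 ** (L - i - 2))
    return np.einsum('ab,xbz->xaz', u, psi).reshape(-1)

def half_chain_purity(psi, L):
    """Purity Tr(rho_A^2) of the left half A of the chain."""
    m = L // 2
    rho = psi.reshape(2 ** m, 2 ** (L - m))
    ra = rho @ rho.conj().T  # reduced density matrix of the left half
    return float(np.real(np.trace(ra @ ra)))

L = 6
psi = np.zeros(2 ** L, dtype=complex)
psi[0] = 1.0  # completely separable initial state |00...0>
purities = [half_chain_purity(psi, L)]
for step in range(10):
    # one brickwork layer: even bonds, then odd bonds
    for i in list(range(0, L - 1, 2)) + list(range(1, L - 1, 2)):
        psi = apply_two_site(psi, haar_unitary(4, rng), i, L)
    purities.append(half_chain_purity(psi, L))
```

The recorded purities start at 1 for the product state and decay toward the Haar-typical value for the half-chain, consistent with the exponential decay in the number of time steps reported in the talk.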

Sergey Bravyi, Memory Time of 3D Spin Hamiltonians with Topological Order, Phys. Rev. Lett. 107, 150504 (2011); arXiv:1105.4159. Joint work with Jeongwan Haah.

Want to store quantum information in a quantum many-body system for a long time. In a self-correcting memory, we want the natural physics of the system to do error correction for us. An example of a classical self-correcting memory: the 2D ferromagnetic Ising model. There is a macroscopic energy barrier which prevents transitions between the ground states. It’s no good as a quantum memory because different ground states can be distinguished locally. (We need topological order.) Begin with an encoding of the information, i.e. an initialization of the system. To model the evolution, we use Metropolis dynamics: if a spin flip would decrease the energy, then flip it; otherwise, flip it with probability [latex]\exp(-\beta \Delta E)[/latex]. If we are below the critical temperature, then the lifetime of the information is long; otherwise it is exponentially short. Now go to the quantum case. We will consider 2D and 3D quantum codes. We consider only those with Hamiltonians where the local operators are supported on a unit cell (locality). All the stabilizers must commute and must be tensor products of Pauli operators, so the Hamiltonian is frustration-free. The initial state of the quantum memory is some ground state. Now we consider dynamics. Restrict to Markovian dynamics, i.e. a master equation in Lindblad form. We will restrict the jump operators to be local with bounded operator norm, and the total number of jump operators should be only of order [latex]n[/latex] (the number of spins). We require that the fixed point of the master equation is the Gibbs state, and also detailed balance. Define the Liouville inner product [latex]\langle X, Y \rangle_\beta = \mathrm{Tr}(e^{-\beta H} X^\dagger Y)[/latex]. Detailed balance amounts to [latex]\langle X, \mathcal{L}(Y) \rangle_\beta = \langle \mathcal{L}(X), Y \rangle_\beta[/latex]. As for decoding, we measure the syndrome and correct the errors: we measure the eigenvalue of each of the stabilizer operators.
Whenever we record a -1 eigenvalue, we call that a defect. The decoder’s goal is to annihilate the defects in the way which is most likely to return the memory to its original state. Let’s ignore the computational complexity of the decoding algorithm for the moment. Then a good decoder is a minimum energy decoder, a la Dennis, Kitaev, Landahl, Preskill. (Why not consider a free-energy minimizer instead?) Rather than use this decoder, we use the RG decoder of Harrington (PhD thesis) and Duclos-Cianci and Poulin (2009). To do this, identify connected clusters of defects and find the minimum enclosing box for each cluster. Then try to erase each cluster by finding the minimum energy error within the box. Some of the boxes will be corrected, some will not. Now coarse-grain by a factor of two and try again. Stop when there are no defects left, then apply the product of all the recorded erasing operators. If it returns the system to the ground state, declare success. Declare failure if the box size increases to the linear size of the lattice and there are still some defects left. Now let’s define the energy barrier, so that a high energy barrier gives high storage fidelity. A stabilizer Hamiltonian has energy barrier [latex]B[/latex] iff any error path of local operators mapping a ground state to an orthogonal ground state has energy cost at least [latex]B[/latex]. Lemma 1: the trace norm distance between the output of the minimum energy decoder and the original ground state is upper bounded by [latex]\lVert \Phi_{\mathrm{ec}}(\rho(t)) - \rho(0) \rVert \le O(tN) \sum_{m \ge B} {N \choose m} \exp(-\beta m)\,.[/latex] How large does the energy barrier need to be to get self-correcting properties? Clearly a constant [latex]B[/latex] is insufficient. What about [latex]B \sim \log N[/latex]? Plug this into the previous bound and try to pick the system size (for a fixed temperature) that minimizes the RHS.
Choose the system size to scale like [latex]N_\beta \sim \exp(\beta/2)[/latex] and you get that the storage infidelity decreases like [latex]\exp(-c\beta^2)[/latex] for some constant [latex]c[/latex]. This is a “marginal” self-correcting memory: there is a system size which is “too big”, beyond which you lose the self-correction properties, but for the right size you get good properties. Contrast this with systems that have logical string-like operators: applying a string-like operator to the vacuum creates a pair of defects at the end points of a string. Then the energy barrier is [latex]O(1)[/latex] and the system can’t be self-correcting; the memory time is [latex]O(\exp(\beta))[/latex]. All 2D stabilizer Hamiltonians have these string-like logical operators. In 3D, all previous attempts had string-like excitations as well. A recent construction by Haah (arXiv:1101.1962) provably has no string-like logical operators in 3D. Suppose that we consider a stabilizer Hamiltonian that has topological quantum order (TQO) but does not have logical string-like operators. What can we say about its energy barrier? What can we say about its memory time? We need to define “TQO” and “string-like”. For simplicity, consider only translationally invariant Hamiltonians. Definition: a finite cluster of defects C is called neutral iff C can be created from the vacuum by acting on a finite number of qubits. Otherwise C is called charged. For example, a single semi-infinite string would be a charged cluster. Definition: TQO iff 1) orthogonal ground states are indistinguishable on scales [latex]\ll L[/latex]; 2) any neutral cluster of defects C can be created from the vacuum by acting on qubits inside the smallest enclosing box [latex]b(C)[/latex] and its constant neighborhood. Now define string-like. Consider a Pauli operator P and suppose it creates defects only inside two boxes A1 and A2. Then the logical string segment P is called trivial iff both anchors are topologically neutral.
Trivial string segments can be localized by multiplying them with stabilizers; these aren’t actually strings. The aspect ratio [latex]\alpha[/latex] of a string segment is the distance between the anchors divided by the size of the anchor regions. The no-strings rule: there exists a constant [latex]\alpha[/latex] such that any logical string segment with aspect ratio greater than [latex]\alpha[/latex] is trivial. Main result, Theorem 1: any sequence of local Pauli errors mapping a ground state to an orthogonal ground state must cross a logarithmic energy barrier. The constant in front of the logarithmic scaling depends only on the aspect ratio (the no-strings constant) and the local site dimension. (The original Haah code has [latex]\alpha = 15[/latex].) This bound is tight and gives the [latex]\exp(\beta^2)[/latex] scaling for the minimum energy decoder. The next result gives a similar statement for the more practical RG decoder. Sketch of the proof of Theorem 1. The no-strings rule implies that local errors can move charged clusters of defects only a little bit. The dynamics of neutral clusters is irrelevant as long as such clusters remain isolated. The no-strings rule is scale invariant, so use an RG approach to analyze the energy barrier. Define a notion of “sparse” and “dense” syndromes. Lemma: dense syndromes are expensive in terms of energy cost. Any particular error path will have some sparse and some dense moments as you follow the path. Now do some RG flow: keep only the dense points in the path. This sparsifies some of the dense ones at the next level of the RG hierarchy. Repeat. Use the no-strings rule to help localize errors. In conclusion: all stabilizer Hamiltonians in 2D local geometries have string-like logical operators, and their energy barrier does not grow with the lattice size. Some 3D Hamiltonians have no string-like logical operators, and their energy barrier grows logarithmically with the size of the lattice.
For small temperature [latex]T[/latex], the memory time grows exponentially with [latex]1/T^2[/latex]. The optimal lattice size grows exponentially with [latex]1/T[/latex].
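The classical Metropolis dynamics used at the start of the talk to model the Ising memory is easy to make concrete. A minimal single-spin-flip sketch (lattice size, coupling [latex]J=1[/latex] and temperatures are illustrative choices of mine):

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One sweep of single-spin-flip Metropolis dynamics for the 2D
    ferromagnetic Ising model (J = 1, periodic boundaries): a flip that
    lowers the energy is always accepted; otherwise it is accepted with
    probability exp(-beta * dE)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # sum of the four nearest neighbours (periodic boundaries)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]
    return spins
```

Below the critical temperature ([latex]\beta \gtrsim 0.44[/latex]) a flip of the stored bit requires crossing the macroscopic energy barrier of a domain wall, so the magnetization of an initially ordered lattice persists for a long time; above it, the information is lost after a few sweeps.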

Tzu-Chieh Wei, Measurement-based Quantum Computation with Thermal States and Always-on Interactions, Phys. Rev. Lett. 107 060501 (2011); arXiv:1102.5153. Joint work with Li, Browne, Kwek & Raussendorf.

Can we do MBQC with a gapped Hamiltonian at temperatures where [latex]\beta\Delta \ll 1[/latex], where [latex]\Delta[/latex] is the energy gap? Consider the toy model of the trivalent AKLT state on a honeycomb lattice. Start with a bunch of spin-1/2 singlets on the edges and project the three virtual spins at each vertex onto the symmetric (spin-3/2) subspace. If we consider the toy model of a single unit cell, then we see some features: the model is gapped (of course) and the Hamiltonian evolution is periodic (of course). So, if we perform operations at regular intervals, perhaps we can get MBQC to work even if [latex]\beta \Delta \ll 1[/latex]. We need to reduce the dimensionality of the central spin from 4 states to 2 states. We can use projection filtering to get this down to a GHZ state along a randomly chosen quantization axis (one of [latex]x, y, z[/latex]). Distill a cluster state by measuring the POVM on the center particles, then measure the bond particles. We can extend this to 3D as well; it deterministically distills a 3D cluster state. As for fault tolerance, you can use the Raussendorf-Harrington-Goyal scheme.

Anne E. B. Nielsen, Infinite-dimensional matrix product states, arXiv:1109.5470, joint work with Cirac and Sierra.

Recall the definition of an iMPS: we take the [latex]D\to\infty[/latex] limit of a standard matrix product state. Let’s consider iMPS constructed from conformal fields. Motivation: we want to identify families of states useful for investigating particular problems in quantum many-body systems. Why the [latex]D\to\infty[/latex] limit, and why conformal fields? We want to describe critical systems: the entanglement entropy is not limited by an area law, there is power law decay of spin-spin correlation functions, and long-range interactions are possible. Mathematical tools from CFT are useful, and sometimes it is possible to derive analytical results. Let’s restrict to a special model, the [latex]\mathrm{SU}(2)_k[/latex] WZW model. There are some chiral primary fields [latex]\phi_{j,m}(z)[/latex], where the spin values range over [latex]j \in \{0, 1/2, \ldots, k/2\}[/latex] and [latex]m[/latex] is the z-component. There is a closed form of the wavefunction for [latex]k=1, j=1/2[/latex] when the [latex]z_i[/latex] are uniform on the unit circle in the complex plane. One can compute the Rényi 2-entropy and the two-point spin correlation function. We can also derive a parent Hamiltonian for this model, using two properties: the null field and the Ward identity. This Hamiltonian is non-local but 2-body. We recover the Haldane-Shastry model in the uniform case. They computed the two- and four-point correlation functions.
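For orientation, a standard finite-[latex]D[/latex] matrix product state assigns each spin configuration an amplitude given by the trace of a product of [latex]D \times D[/latex] matrices; the iMPS of the talk is the limit in which these matrices are replaced by CFT vertex operators, which is the sense in which [latex]D\to\infty[/latex]. A minimal sketch of the finite-[latex]D[/latex] object (periodic boundary conditions; the matrices below are illustrative choices of mine):

```python
import numpy as np

def mps_amplitude(A, config):
    """Amplitude <s_1 ... s_N | psi> of a periodic matrix product state:
    Tr(A[s_1] A[s_2] ... A[s_N]), with one D x D matrix A[s] per local state s."""
    D = A[0].shape[0]
    M = np.eye(D)
    for s in config:
        M = M @ A[s]
    return np.trace(M)

# D = 1 sanity check: the MPS reduces to a product state, so the amplitude
# is just the product of the scalar entries.
A = [np.array([[2.0]]), np.array([[3.0]])]
amp = mps_amplitude(A, (0, 1, 1))  # 2 * 3 * 3
```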

David Sénéchal, The variational cluster method.

Ultimate goal: solve the Hubbard model, i.e. the simplest model of interacting electrons that you can write down. In particular, we are interested in the Green’s functions, single-particle properties and the susceptibilities (2-particle properties). We base the cluster method on a tiling of the lattice (i.e. a superlattice). We solve for local information first in each block and then try to restore the connections between the blocks. We can rewrite this using a Fourier transform and parameterize in terms of two wave vectors: a discrete cluster wave vector (telling you which cluster you’re in) and a reduced wave vector (continuous). Now we decompose the Hamiltonian into a cluster term plus a perturbation which contains the inter-cluster hopping terms. Then we want to treat this at lowest order in perturbation theory. The approximate Green’s function at this level of approximation is very simple: invert the cluster Green’s function, subtract the perturbation, and invert back to get the total Green’s function. The lattice self-energy is approximately the cluster self-energy. Note that this method breaks translation invariance, but this can be “restored” by completing the Fourier transform (i.e. Fourier transforming over the lattice sites). Then we can “periodize” the Green’s function. (This seems to be the right way to do it, rather than periodizing the self-energy, at least at half-filling where we have an exact solution in 1D.) Cluster perturbation theory (CPT) is exact when [latex]U=0[/latex] and when [latex]t=0[/latex], and also gives exact short-range correlations. It allows a continuum of wavevectors. However, it doesn’t allow long-range order or broken symmetry. The approximation is controlled by the size of the cluster, and finite-size effects are important. There is no self-consistency condition (unlike dynamical mean-field theory). One idea to capture a broken symmetry in CPT is to add a Weiss field.
To set the relative weight of this term you would need some kind of principle, such as energy minimization. Instead, we use an idea due to Potthoff (M. Potthoff, Eur. Phys. J. B 32 429 (2003)). We define a functional of the Green’s function with the property that varying with respect to G gives the self-energy, and add some terms so that we get the Dyson equation. The value at the stationary point is the grand potential. There are now several approximation schemes: we could approximate the Euler equation (the equation for the stationary solution); we could approximate the functional (Hartree-Fock, etc.); we could restrict the variational space but keep the functional exact. We focus on the third method (following Potthoff). Potthoff suggested using the self-energy instead of the Green’s function and used the Legendre transform. This functional is universal in the sense that its functional form only depends on the interaction part. We can introduce a reference system which differs from the original Hamiltonian by one-body terms only. Suppose that we can solve this new Hamiltonian exactly. Then at the physical self-energy we can compute the value of the grand potential. Thus, we can find the dependence of the grand potential on the Green’s function. Showed some examples with Néel antiferromagnetism, superconductivity and a dynamical example studying the Mott transition.
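As a sanity check on the CPT step (“cluster Green’s function minus the perturbation”), here is a minimal [latex]U=0[/latex] example where CPT is exact: a 1D tight-binding chain tiled by two-site clusters. The poles of [latex]G(\omega, \tilde{k}) = [\omega - H_c - V(\tilde{k})]^{-1}[/latex] are the eigenvalues of [latex]H_c + V(\tilde{k})[/latex], and they reproduce the exact band [latex]\pm 2t|\cos\tilde{k}|[/latex] folded into the reduced Brillouin zone. All conventions below are my own illustrative choices, not the speaker’s.

```python
import numpy as np

def cpt_poles(t, ktilde):
    """Poles of the CPT Green's function G(w, k) = [w - H_c - V(k)]^{-1}
    for a 1D tight-binding chain with two-site clusters at U = 0."""
    # hopping inside one cluster
    Hc = np.array([[0.0, -t], [-t, 0.0]])
    # inter-cluster hopping (site 2 of a cluster to site 1 of the next),
    # Fourier-transformed over the superlattice (superlattice constant 2)
    V = np.array([[0.0, -t * np.exp(-2j * ktilde)],
                  [-t * np.exp(2j * ktilde), 0.0]])
    return np.sort(np.linalg.eigvalsh(Hc + V))
```

At [latex]U=0[/latex] the “cluster plus inter-cluster hopping” decomposition loses nothing, which is exactly the statement that CPT is exact in that limit; interactions are what make the lowest-order treatment approximate.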

Guillaume Duclos-Cianci, Local equivalence of topological order: Kitaev’s code and color codes, arXiv:1103.4606, joint work with H. Bombin and D. Poulin.

Focus on the toric code to be concrete. The main result is the following theorem: all 2D topological stabilizer codes are equivalent, meaning there exists a local unitary mapping to a certain number of copies of Kitaev’s toric code. First review the toric code and the fact that the excitations are bosons with mutual semionic statistics, and the notion of topological charge (i.e. equivalence classes of excitation configurations). Now consider the decoding problem: given a configuration of defects, how can we decide which equivalence class we started in? Now define color codes. Gave a lot of details of the mapping taking the color code to two copies of the toric code, but I would really need some pictures to give an adequate description. Showed some results on the threshold for decoding using the mapping.
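The equivalence-class bookkeeping in the decoding problem can be made concrete for the toric code: a closed error chain on an [latex]L \times L[/latex] torus is classified by the parities with which it crosses two fixed reference cuts. A sketch under my own conventions (error chains stored as binary arrays over horizontal and vertical edges):

```python
import numpy as np

def homology_class(h, v):
    """Homology class of a closed error chain on an L x L torus.
    h[r, c] (v[r, c]) marks horizontal (vertical) edges in the chain.
    For a chain with trivial syndrome these two crossing parities are
    unchanged by multiplication with stabilizers, so they label the four
    equivalence classes; (0, 0) is the identity class."""
    wrap_x = int(np.sum(h[:, 0]) % 2)  # crossings of a fixed vertical cut
    wrap_y = int(np.sum(v[0, :]) % 2)  # crossings of a fixed horizontal cut
    return (wrap_x, wrap_y)
```

With this bookkeeping, checking whether a correction R undid an error E amounts to XORing the binary edge arrays of R and E (their product R.E) and verifying that the result has class (0, 0).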

Philippe Corboz, Recent progress in the simulation of strongly correlated systems in two dimensions with tensor network algorithms, several papers, joint work with Bauer, Troyer, Mila, Vidal, White, Läuchli & Penc.

Want to use tensor networks to study strongly correlated systems. Typically, we use quantum Monte Carlo (QMC), but this fails for fermionic or frustrated systems because of the sign problem, so it is important to look for alternatives. To model fermions with a tensor network, we need to take the exchange statistics into account, and this can be done. Consider the t-J model: does the tensor network approach reproduce the “striped” states observed in some cuprates? DMRG (on wide ladders) says YES, while variational and fixed-node Monte Carlo say NO. What does iPEPS say? It says stripes! Another example: the [latex]\mathrm{SU}(N)[/latex] Heisenberg models on a 2D square lattice. For [latex]N=2[/latex] we know there is Néel order. The iPEPS result reproduces known results from QMC in this case; a good start. There are problems fitting as you increase the bond dimension [latex]D[/latex] and it is difficult to extrapolate. For [latex]N=3[/latex] and [latex]N=4[/latex] the sign problem means you can’t use QMC. Focus on the case [latex]N=4[/latex]. Here they find a new ground state which has considerably lower energy than previous variational results. The new ground state exhibits dimerization, and the color variation across each dimer exhibits Néel order (“dimer-Néel order”). Here the colors just refer to the 4 different degrees of freedom. This seems to be the predicted ground state, as opposed to the plaquette state predicted by linear flavor-wave theory.

3 Replies to “Quantum Information in Quantum Many-Body Physics — Live Blogging Day 1”

  1. Quick comment on decoding and the Bravyi-Haah results. Minimum-energy decoding a la DKLP is poly-time, while free-energy decoding appears to be exponential-time. Both are relevant for academic studies only, in my opinion, such as relating decoding to phase transitions in associated stat. mech. models or characterizing the “ultimate limits” of decoding performance. In practice, it is necessary to trade performance for speed, and much faster decoders such as the RG decoders mentioned are good candidates.

  2. In order to check whether the decoding algorithm has been able to correct an error on the toric code/surface code, one needs to check whether the product of the error E and the correction operator R belongs to the identity equivalence class. Given a surface code lattice, how does one take this operator R.E and check that its action is topologically trivial (i.e. that it belongs to the identity equivalence class)?

    1. In the above comment, I meant to ask: how does one write an efficient numerical algorithm to check that R.E belongs to the trivial equivalence class?
