QIP 2012 Day 1

Sergey Bravyi, Topological qubits: stability against thermal noise.
Joint work with Jeongwan Haah, based on arXiv:1105.4159.


Sergey opened his talk with an image of a bagel in a Solfatara landscape, meant to evoke topologically-protected qubits (get it? a bagel?) in a thermal (steam) bath.
The question is whether we can engineer an energy landscape with high barriers between encoded states so as to enable passive quantum error correction; i.e. to create a topologically protected qubit. Here are the main ingredients:

  • Stabilizer codes on a lattice with local generators and code distance $latex d \approx L$, where $latex L$ is the lattice size.
  • A code Hamiltonian with an energy penalty for states outside the logical subspace. The Hamiltonian is defined simply by having each violated stabilizer cost unit energy.
  • Thermal noise: should be described by a Markovian master equation with local jump operators. The spectral density should obey detailed balance; this will be the place where temperature enters.
  • Finally, there should be a decoder. Here we consider the RG (renormalization group) decoder.

Kitaev’s toric code (1997) was the first example of a topological QECC. If you’re not familiar with it, then you should read about that instead of the current results. The key idea is that the errors are strings that wrap around the torus, and so have length at least $latex L$. However, since a partial string incurs only a constant energy penalty, the memory time (=amount of time until the state degrades by a constant amount) is $latex e^{2\beta}$ [arXiv:0810.4584], independent of the lattice size! This is pretty disappointing; we can get this performance by not encoding the qubit at all.
Indeed, any 2D topological stabilizer code has constant energy barrier [0810.1983], which implies that relaxation to thermal equilibrium occurs at a rate independent of system size $latex L$.
This no-go result is daunting, but in higher dimensions we can still achieve some nontrivial topological protection. Here’s what we want: a topological stabilizer code on a D-dimensional array with commuting Pauli stabilizers $latex G_a$, each with support of size O(1), and with each qubit acted on by O(1) stabilizers. The ground space of the corresponding Hamiltonian is the set of states that are +1 eigenvectors of every stabilizer. We would like the distance to grow at least linearly with the lattice size; such models have intrinsic robustness.
Fortunately, there is a shining example of a spatially local stabilizer code with all the self-correcting properties we want. Unfortunately, it requires D=4; it is the 4-d toric code. Here is its energy landscape.

Defects are loops around error membranes, and any sequence of single-qubit Pauli errors mapping a ground state to an orthogonal one must create roughly $latex L$ defects at some step. Thus the memory time is exponential in $latex L$ for low $latex T$ [0811.0033].
All 3D stabilizer Hamiltonians studied so far have a constant energy barrier. Indeed, this is true of all 3D codes that are translation-invariant and “scale-invariant” [1103.1885]. (Here, scale invariance means that the ground state degeneracy doesn’t depend on the size of the lattice.) Haah’s breakthrough 1101.1962 is the first example of a 3-d stabilizer code without string-like logical operators. In his model, defects cannot move very far.
Our results establish self-correcting properties for any topological stabilizer code that obeys the no-strings rule, e.g. Haah’s codes.

  1. Their energy barrier is $latex \Omega(\log L)$.
  2. They have partial self-correction. Specifically, the Arrhenius law together with the logarithmic energy barrier gives lifetime $latex L^{c\beta}$, for any inverse temperature $latex \beta>0$, and for $latex L \leq L^*$ for some critical value $latex L^* \sim e^{\beta/3}$. See figure:

This lifetime reaches a maximum value (as a function of $latex L$) when $latex L$ is proportional to $latex e^{\beta/3}$. Thus, choosing $latex L$ optimally results in a memory lifetime of $latex T_{\mathrm{mem}} \sim e^{c\beta^2/3}$.
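To spell out the optimization (a one-line consequence of the two statements above):

```latex
% Arrhenius-law lifetime with a log(L) energy barrier, maximized over L:
T_{\mathrm{mem}}(L) \sim L^{c\beta}
\quad (L \le L^* \sim e^{\beta/3}),
\qquad
T_{\mathrm{mem}}(L^*) \sim \bigl(e^{\beta/3}\bigr)^{c\beta} = e^{c\beta^2/3}.
```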
One example of a code with no string-like errors is the 3-d cubic code of Haah. It is obtained by placing two qubits at each site of a cubic lattice and tiling the following stabilizer generators. See figure (from the Bravyi-Haah paper).

The noise model: one can get a Markovian noise model by considering the Davies weak coupling limit, where the Lindblad operator $latex A_{k,\omega}$ transfers energy $latex \omega$ from the system to the bath. The spectral density $latex r(\omega)$ obeys detailed balance: $latex r(-\omega) = e^{-\beta\omega}\, r(\omega)$. Note that this is the only place where temperature enters.
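For concreteness, such a Davies generator has the standard Lindblad form below (a generic sketch of the weak-coupling limit, not the specific operators from the paper):

```latex
\frac{d\rho}{dt} = -i[H,\rho] + \sum_{k,\omega} r(\omega)
\Bigl( A_{k,\omega}\,\rho\,A_{k,\omega}^\dagger
 - \tfrac{1}{2}\bigl\{A_{k,\omega}^\dagger A_{k,\omega},\,\rho\bigr\} \Bigr),
\qquad r(-\omega) = e^{-\beta\omega}\, r(\omega).
```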
The decoder: After measuring the syndrome, the correction operator is found using ideas from Jim Harrington’s thesis [2004; I can’t find it online] and the Renormalization Group decoder from 1006.1362. See photo.
The strategy is to:

  1. Find connected clusters
  2. For each connected cluster C, find the minimum enclosing box and try to annihilate C by an operator acting within the box.
  3. Stop if no defects left.
  4. If some defects are left, then increase the length scale by a factor of 2 and repeat.

The RG decoder stops when all defects have been annihilated or when length reaches the lattice size, in which case we have failure. We can also have failure when all defects have been annihilated but the system has been returned to a wrong ground state.
The RG decoder can be implemented in time poly(L) (and can be parallelized to run in time log(L)).
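To make the clustering loop concrete, here is a toy Python sketch of the decoder’s control flow. This is not the Bravyi-Haah implementation: the “annihilate iff even” rule stands in for the local solver that would actually try to clear each cluster inside its bounding box (that rule is only correct for toric-code-like syndromes), and the sketch ignores the second failure mode of landing in the wrong ground state.

```python
import itertools

def clusters(defects, scale):
    """Group defects into connected clusters: two defects are connected
    if they are within `scale` of each other in the max-norm."""
    parent = list(range(len(defects)))

    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in itertools.combinations(range(len(defects)), 2):
        if max(abs(a - b) for a, b in zip(defects[i], defects[j])) <= scale:
            parent[find(i)] = find(j)
    groups = {}
    for i, d in enumerate(defects):
        groups.setdefault(find(i), []).append(d)
    return list(groups.values())

def rg_decode(defects, L):
    """Double the length scale until every cluster is annihilated, or the
    scale reaches the lattice size L (declared a failure)."""
    scale = 1
    while defects and scale < L:
        survivors = []
        for c in clusters(defects, scale):
            if len(c) % 2 == 1:           # toy annihilation rule: even
                survivors.extend(c)       # clusters vanish, odd ones stay
        defects = survivors
        scale *= 2                        # step 4: double and repeat
    return not defects                    # True = all defects annihilated

# Two nearby defect pairs on a 16x16 lattice get cleared at small scales:
print(rg_decode([(1, 1), (1, 2), (9, 9), (9, 12)], L=16))   # True
```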
We can define the memory time as follows. We time evolve according to the master equation with the local quantum jump Lindblad operators. After some time $latex t$, we run the decoder. Then we take the worst case (over all initial states) of the trace distance between the initial and final states:
$latex \| \mathcal{D}(\rho(t)) - \rho(0) \|_1 \le O(t)\, N\, 2^k \exp(-\beta m)\, (1+\exp(-\beta))^N.$
Here N is the number of physical qubits, k is the number of logical qubits, t is the time, $latex beta$ is inverse temperature and m is the number of errors that can be corrected by the decoder (or more precisely, the height of the energy barrier.) Essentially, there is a temperature/energy factor that competes with an entropic factor.
Analyzing this bound, we find that the memory time grows like $latex L^{c\beta}$ as long as $latex L \le \exp(\beta/3)$.
This gives a large but finite lower bound on the memory time at a fixed temperature.
Is it optimal? Maybe the analysis is pessimistic and the Haah code actually has an exponentially large lifetime? Maybe growing L can result in unbounded memory time when $latex \beta$ is sufficiently large? The authors carried out a Monte Carlo simulation of the Haah code with only X errors to test these possibilities. It used 1000 days of CPU time on IBM’s Blue Gene.
The plots are qualitatively similar to the lower bounds on memory time. In particular, they show that the logarithm of the maximum memory time scales quadratically with $latex \beta$, and that the memory time at fixed temperature increases with $latex L$ and then starts to decrease.
How do the proofs go?
Idea 1: The no-strings rule implies localization of errors. That is, any error E can be written as $latex E = E_{\mathrm{loc}} \cdot G$, where G is a stabilizer and $latex E_{\mathrm{loc}}$ is supported on at most 2 neighborhoods.
In order for the accumulated error to have large weight, at least one intermediate syndrome must be non-sparse, i.e. contain a pair of defects within distance $latex a$ of each other.
Idea 2: Uses scale invariance of the no-strings rule.
Define sparseness and denseness at different scales. A syndrome which is dense at p consecutive scales will include a cluster of p defects.
Show that to implement a logical operation, at least one intermediate syndrome must be dense at roughly log(L) different scales.

Mark Wilde,
Advances in classical communication for network quantum information theory, based on 1109.0802 and 1102.2624

Consider the problem of an interference channel, with two senders and two receivers. This problem is hard enough in the “ccqq” case, meaning that the channel takes two classical inputs to a bipartite quantum output: i.e. $latex x_1, x_2 \rightarrow \rho_{x_1,x_2}^{B_1B_2}$. This is depicted here (figure taken from their paper).

Along the way to understanding this, we’ll need to consider a multiple-access channel, with two senders and one receiver. The achievable rate region is in a sense an optimistic one, bounded by the obvious constraints:

$latex R_1 \leq I(X_1;B|X_2)$
$latex R_2 \leq I(X_2;B|X_1)$
$latex R_1 + R_2 \leq I(X_1X_2;B)$

Why are these obvious? The first two constraints are what we would obtain if the receiver has successfully decoded one message, and both parties get to use that information to design a joint encoder-decoder pair. The last constraint is what we would get if the two senders could work together, apart from the requirement that their average input to the channel should be product across the two senders. Thus, these are natural upper bounds. The nonobvious part is that this region can be achieved.
The coding theorem was obtained by Andreas Winter, when he was a grad student (quant-ph/9807019). How does it work? First, we observe that the desired rate region can be obtained as a convex combination of the “corner points” of the rate region: $latex (I(X_1;B), I(X_2;B|X_1))$ and $latex (I(X_1;B|X_2), I(X_2;B))$.
To achieve one of these corner points, use the fact that the two senders are sending uncorrelated inputs, so we can average over the inputs of one sender (say $latex X_2$) to obtain an effective single-sender/single-receiver channel, for which coding is possible at rate $latex I(X_1;B)$. Of course, measurement is potentially destructive, but since we are operating in the regime of very low error, we can use the fact that measurements with nearly certain outcomes cause very little damage (the infamous “Tender Measurement Lemma”). (Ok, it’s mostly called the “gentle measurement lemma” but I like “tender.”) Thus, the decoder obtains $latex X_1$ and, conditioned on it, can decode $latex X_2$. (Clearly the sender knows $latex X_1$ as well. Note that this is no longer true for the multiple-sender case.)
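These quantities are easy to compute numerically for a toy channel. The sketch below (my own example, not from the talk) uses a hypothetical cc-q channel whose single-qubit output encodes $latex x_1 \oplus x_2$; the corner point comes out to $latex (I(X_1;B), I(X_2;B|X_1)) = (0, 1)$, i.e. nothing can be learned about $latex x_1$ alone, but $latex x_2$ is perfectly decodable once $latex x_1$ is known.

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Hypothetical cc-q MAC: inputs x1, x2 in {0,1}; output qubit |x1 XOR x2>.
ket = {0: np.array([[1.0], [0.0]]), 1: np.array([[0.0], [1.0]])}
rho = {(x1, x2): ket[x1 ^ x2] @ ket[x1 ^ x2].T
       for x1 in (0, 1) for x2 in (0, 1)}

def H_avg(pairs):
    """Entropy of the output averaged uniformly over the given inputs."""
    return vn_entropy(sum(rho[xy] for xy in pairs) / len(pairs))

# I(X1 X2; B) = H(B) - avg H(B|x1,x2); outputs are pure, so the latter is 0.
I_X1X2_B = H_avg(list(rho))
# I(X2; B | X1) = avg over x1 of [ H(B|x1) - avg over x2 of H(B|x1,x2) ].
I_X2_B_given_X1 = sum(0.5 * H_avg([(x1, 0), (x1, 1)]) for x1 in (0, 1))
I_X1_B = I_X1X2_B - I_X2_B_given_X1        # chain rule gives the corner point
print(I_X1_B, I_X2_B_given_X1, I_X1X2_B)   # 0.0 1.0 1.0
```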
Sequential decoding: To do decoding, we need to use the typical projector for the average state, as well as the conditionally typical projectors for codewords. The HSW idea is to use the pretty good measurement. The idea of sequential decoding, in analogy with the classical idea, is to check each codeword sequentially. It works out pretty similarly, with the measurement operators in both cases being lightly distorted versions of the conditionally typical projectors.
The crucial lemma making this possible is from 1109.0802, by Pranab Sen. He calls it a “noncommutative union bound”. The statement is that

$latex 1 - \mathrm{tr}(\Pi_n \cdots \Pi_1\, \rho\, \Pi_1 \cdots \Pi_n) \le 2\sqrt{\sum_{i=1}^n \mathrm{tr}((1-\Pi_i)\rho)}$
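The bound is easy to sanity-check numerically. Here is a small random test (my own sketch, not from the paper; the dimension, rank, and number of projectors are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 8, 4

v = rng.normal(size=d) + 1j * rng.normal(size=d)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())               # a random pure state

def random_projector(rank):
    """Projector onto a random `rank`-dimensional subspace."""
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q[:, :rank] @ q[:, :rank].conj().T

projs = [random_projector(7) for _ in range(n)]  # high rank: nearly certain

# LHS: failure probability of measuring Pi_1, ..., Pi_n in sequence.
M = np.eye(d, dtype=complex)
for P in projs:
    M = P @ M                             # M = Pi_n ... Pi_1
lhs = 1 - np.trace(M @ rho @ M.conj().T).real

# RHS: twice the square root of the summed individual failure probabilities.
rhs = 2 * np.sqrt(sum(1 - np.trace(P @ rho).real for P in projs))
print(lhs <= rhs, lhs, rhs)               # the bound holds
```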

Successive decoding: In general, we’d like to analyze these problems of multipartite information theory using the traditional single-sender/single-receiver tools, like HSW. Since each individual code works with exponentially small error, and the gentle measurement lemma states that decoding causes exponentially small damage, we should be able to compose several protocols without much trouble. The only subtlety is that the typical projectors don’t commute, which is where the noncommutative union bound comes in. We apply it to the following “intersection projectors”:
$latex \tilde{\Pi}_{x_1^n, x_2^n} = \Pi\, \Pi_{x_1^n}\, \Pi_{x_1^n, x_2^n}\, \Pi_{x_1^n}\, \Pi$
Most important open problem: find a general three-sender quantum simultaneous decoder. Solving this should hopefully yield the insights required to handle an unlimited number of senders.

Nilanjana Datta,
Quantum rate distortion, reverse Shannon theorems and source channel separation,
joint work with Min-Hsiu Hsieh, Mark Wilde, based on 1108.4940.

Compression and transmission: the source coding and noisy coding theorems of Shannon. The fundamental limit on compression is the entropy.
Lossless compression is often too stringent a condition. For example, consider jpg compression, which gives faithful images (to a human eye) despite throwing away lots of bits of information. We consider Rate Distortion Theory, i.e. the theory of lossy data compression. We are interested in the maximum rate of compression given a fixed maximum amount of distortion. Define the rate distortion function:
$latex R_c(D) =$ minimum classical asymptotic cost for sending many copies of state $latex \rho$ with per-copy distortion $latex \leq D$, where $latex c$ is for classical.
For a fixed value of the distortion function $latex 0 \le D < 1$, we work in the storage setting. Define a quantum rate distortion function in the asymptotic limit.
Barnum (in quant-ph/9806065) proved the first lower bound on the quantum rate distortion function:
$latex R_q(D) \ge \min_{S_\rho(N,D)} I_c(\rho, N)$
where the term on the right is the coherent information and $latex S_\rho(N,D) = \{N\ \mathrm{CPTP} : d(\rho,N) \le D\}$. But the coherent information can be negative, so this can’t be a tight lower bound in all cases.
Now move to the communication setting. One can define the entanglement-assisted quantum rate distortion function. They find a single-letter formula for it, given by $latex \min_{S_\rho(N,D)} \frac{1}{2} I(R;B)_\omega$ (in terms of the quantum mutual information). This is both a lower bound and achievable, using quantum state splitting. Alternatively, the achievability follows from the quantum reverse Shannon theorem (QRST).
The unassisted quantum rate distortion function can also be found using the QRST, but one needs a regularized formula.

Jon Yard,
Quantum communication with Gaussian channels of zero quantum capacity,
joint work with Graeme Smith and John A. Smolin, based on 1102.4580.

The title on arxiv.org is “Gaussian bosonic synergy: quantum communication via realistic channels of zero quantum capacity”. Realistic? Synergy?? Think about this, kids, before you go off to work in industrial research.
This paper concerns the fascinating topic of channels with zero quantum capacity. For zero classical capacity, such channels are simple to describe: the output cannot depend on the input at all. Here is one example.

But zero quantum capacity occurs even for channels whose output depends on the input. For example, consider the completely dephasing channel, aka a classical channel. Obviously this has no quantum capacity. The fact that this channel has zero quantum capacity follows both because it is anti-degradable (meaning that Eve’s output can simulate Bob’s; this implies 0-capacity by no-cloning) and because it is PPT, meaning it maps every input state to a PPT output state, or equivalently, its Jamiolkowski state (obtained by feeding in half of a maximally entangled state) is PPT. Sadly, these two conditions are still pretty much our only examples of zero-capacity channels. See a previous talk of Graeme’s for a framework that could potentially expand this set of conditions.
Let’s talk a bit more about those two examples. The anti-degradable channels are intuitively those that give more to Eve than Bob. So erasure channels with erasure rate >50% count, as do attenuation channels (for photons; can be modeled by a beamsplitter) and certain depolarizing channels (using an argument due to Brandao, Oppenheim, and Strelchuk http://arxiv.org/abs/1107.4385). On the other hand, the PPT condition is at least easy to check, and PPT channels include some with nonzero private capacity.
The general question of characterizing zero-capacity channels is a very interesting one, though it’s not clear whether we are up to it. But here is a nice specific version of the problem. The capacity of the depolarizing channel drops to zero at noise rate $latex p^*$, where $latex 0.2552 \leq p^* \leq 1/3$. What is $latex p^*$???
A good quote from Jon:

Superactivation demonstrates that asking about the capacity of a quantum channel is like asking about a person’s IQ: one number isn’t enough to capture everything you might want to know.

The main result: There exist Gaussian channels, each with zero quantum capacity, that can be combined to achieve nonzero capacity. These channels are more or less within the reach of current experiments (not sure about all the decoding), requiring about 10dB of squeezing and about 60 photons/channel. This is interesting in part because Gaussian channels seemed to be so well-behaved! For example, there is no NPT bound entanglement in Gaussian channels.
Infinite dimensions: This paper works in infinite dimensions, which many in our field are inherently uncomfortable with, and others are inexplicably drawn to. Like representation theory, or complexity classes with lots of boldface capital letters, the presence of phrases like “Quasi-free maps on the CCR algebra” can be either a good or a bad thing, depending on your perspective.
Q: For those of us who prefer finite dimensions, can we truncate the Hilbert space, perhaps by taking advantage of a constraint on the total number of photons?
A: In principle yes, but then the channels are no longer Gaussian, so our analysis doesn’t work, and in general, Gaussian things are easier to analyze, so that is a sense in which infinite dimensions can actually be simpler to deal with.

Runyao Duan,
Bounds on the distance between a unital quantum channel and the convex hull of unitary channels, with applications to the asymptotic quantum Birkhoff conjecture,
joint work with Nengkun Yu and Quanhua Xu.

The story behind this work starts with the Birkhoff-von Neumann theorem (often called Birkhoff’s theorem) which states that doubly stochastic matrices (matrices with nonnegative entries and all rows and columns summing to one) are convex combinations of permutations. An analogous claim for quantum channels is that unital channels (i.e. mapping the maximally mixed state to itself) are mixtures of unitaries. However, this claim is false. More recently, Smolin, Verstraete and Winter conjectured in quant-ph/0505038 that many copies of a unital channel should asymptotically approach convex combinations of unitaries. (The rest of their paper contained evidence suggestive of this claim, and in general has nice results that are an important precursor of the merging revolution in quantum information theory.) This conjecture was called the Asymptotic Quantum Birkhoff Conjecture (AQBC) and was discussed on Werner’s open problems page. A few years later, it was shown to hold for everyone’s favorite unital-but-not-mixture-of-unitaries channel (What’s that? The one with Jamiolkowski state equal to the maximally mixed state on the antisymmetric subspace of $latex \mathbb{C}^3 \otimes \mathbb{C}^3$, of course!) by Mendl and Wolf.
Sadly, the AQBC is also false, as Haagerup and Musat recently proved. However, their paper uses von Neumann algebras, and their abstract begins

We study factorization and dilation properties of Markov maps between von Neumann algebras equipped with normal faithful states, i.e., completely positive unital maps which preserve the given states and also intertwine their automorphism groups.

(Another approach, in preparation, is due to Ostrev, Oza and Shor.)
This raises a new open question: is there a nice simple proof that finite-dimension-loving quantum information people can follow without having to learn any new math? (Note: probably we should learn more about von Neumann algebras. Just like we should make our own pizza from scratch. But if there’s a good pizzeria that delivers, I’d still like to hear about it.)
This result delivers, by disproving the AQBC with an elegant, intuitive proof. The key technical lemma is the kind of tool that I can imagine being useful elsewhere. It states that, for the problem of distinguishing two convex sets of quantum channels, there exists a single measurement strategy that works well in the worst case. This builds on a similar lemma (due to Gutoski-Watrous ‘04 and Jain ‘05) that is just minimax applied to state discrimination: if we want to distinguish two convex sets of density matrices, then there exists a single measurement that works well in the worst case. How well? Its performance is at least as good as what we get by choosing the worst pair of states from the two convex sets, and then choosing the optimal measurement based on these.
This paper extends this result to sets of quantum channels. The optimal distinguishability of a pair of quantum channels is given by their diamond-norm distance, which is simply the largest trace distance possible between their outputs when applied to one half of some (possibly entangled) state. To distinguish convex sets of channels, they use minimax again to show that there exists a measurement strategy (this time consisting both of an input state to prepare AND a measurement on the output) that works well for any worst-case choice of channels. This set of measurement strategies sounds potentially non-convex; however, Watrous’s SDP characterization of the diamond norm shows that measurement strategies form a convex set, so everything works out.
Next, this paper applies this to find ways to estimate the distance between a unital channel and the convex hull of unitary channels. To do this, they use a type of channel which they call a Schur channel (more commonly known as a Schur multiplier): given a matrix $latex A$, define the channel $latex \Phi_A(X) = A \circ X$, where $latex \circ$ is the entrywise (Schur) product.
Theorem: the following are equivalent (see the sketch after the list):

  1. $latex Phi_A$ is a Schur channel
  2. $latex Phi_A$ is unital with diagonal Kraus operators
  3. $latex A$ is positive semidefinite and $latex a_{kk}=1$ for each $latex 1 \leq k \leq d$.
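A quick numerical illustration of the equivalence (a minimal sketch; the Gram-matrix construction is just one convenient way to manufacture a valid $latex A$ satisfying condition 3):

```python
import numpy as np

def schur_channel(A, X):
    """Phi_A(X) = A ∘ X, the entrywise (Schur) product."""
    return A * X

d = 3
rng = np.random.default_rng(0)
# Condition 3: A positive semidefinite with unit diagonal. A Gram matrix
# of unit vectors has exactly these properties.
V = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
V /= np.linalg.norm(V, axis=0)            # normalize each column
A = V.conj().T @ V                         # PSD, with a_kk = <v_k|v_k> = 1

# Unitality: the maximally mixed state is a fixed point.
print(np.allclose(schur_channel(A, np.eye(d) / d), np.eye(d) / d))   # True
# Trace preservation: diagonal entries pass through untouched (a_kk = 1).
X = rng.normal(size=(d, d)); X = X + X.T
print(np.isclose(np.trace(schur_channel(A, X)), np.trace(X)))        # True
```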

Using Schur channels and their discrimination lemma, they are able to make short work of the AQBC. Their next result is that any Schur channel for which the convex hull of its Kraus operators doesn’t include any unitary is a counterexample to the AQBC. This follows from the fact that the closest random-unitary channel to a Schur channel can be taken (up to a factor of 2 in the distance) to be a mixture of diagonal unitaries.
This work doesn’t tell us all of the AQBC-violating channels, but it does describe a large class of them.
They also mentioned connections to Grothendieck’s inequality (work in progress) and Connes’ embedding problem. Connections to these two topics were also mentioned in Haagerup and Musat’s work.

Markus Greiner, Quantum Magnetism with Ultracold Atoms – A Microscopic View of Artificial Quantum Matter.

Some motivation: many condensed matter models can’t be tested with current experiments. Even in simple models it is difficult to calculate basic quantities. Want to build new materials. The idea: use ultracold atoms in optical lattices to simulate condensed matter models. The dream of a quantum simulator is to build a special purpose device which is still useful before full-scale quantum computers are built.
He mentioned that quantum simulators are robust to errors… I believe that this is an open question which theorists should be working on.
How do we cool the atoms? First trap the atoms with lasers. For weakly interacting quantum gases, we can form a BEC where all of the particles can be well described by a single wavefunction. But for strongly correlated quantum gases there are interactions which preclude such a description. In an optical lattice, we have atoms interacting strongly on lattice sites. There is new physics here: for example, superfluid to Mott insulator transition. We can use the optical lattice to make synthetic matter: we can simulate electrons in a crystal. With fermionic atoms, we can observe a fermionic superfluid and things like the BEC-BCS crossover. Bose-Hubbard models and high-T_c superconductivity could also be probed in these systems. Also, quantum magnets, low-dimensional materials, etc.
Let’s focus on quantum magnetism. The main experimental tool is the quantum gas microscope. The goal is to take “bottom-up” tools like high-fidelity readout and local control from e.g. ion traps to more “top-down” systems like optical lattices. (This is clearly related to scalability of quantum computation, ultimately.) The quantum gas microscope can image fluorescence from single atoms trapped in the lattice. To image this, they measure parity.
He then showed a movie of atoms hopping around in a shallow lattice (in the classical regime, obviously). Very cool. Here is one still from the movie:

The Hamiltonian for this system is a hopping term and an interaction term which penalizes multi-site occupations: a basic Bose-Hubbard model. If the hopping terms dominate, then it’s a superfluid; if the on-site repulsion is dominant, we get a Mott insulator. They can get approximately unit site occupation number in the Mott insulator regime.
How to measure interesting quantities, like correlation functions? Measure the site occupation number (mod 2) on neighboring sites. One can push this further and measure string order parameters. In the future, the hope is to measure more complicated things like Chern number.
They observed the Lieb-Robinson light-cone of correlations.
Algorithmic cooling: Suppose that you have random site occupations. Then you can try to “cool” the system by pushing some of the entropy into a subsystem and get a pure state on the rest. They use Rydberg blockade interactions to try to smooth the site occupation number fluctuations, and then transfer to a superfluid. The temperature is not usually constant in their system, but the entropy is consistently low.
Quantum magnetism: Ising spin models. First consider a chain with on-site longitudinal and transverse fields. There are two phases: paramagnetic and antiferromagnetic. By putting a Mott insulator in an electric field, we can observe this type of transition. We can try to extend these results to two dimensional frustrated spin systems. For example, dimerized states (four-state clock model). These models have spin liquid-like groundstates.
Towards a universal quantum computer: The toolbox includes low entropy initialization, single site readout, single site control, and interactions, all with high fidelity. So what does the future hold?
Q: What types of error are these quantum simulators robust to?
A: Temperature is one type of error. Thermalization might not always occur. Sometimes you want some disorder, since real systems also have disorder.
Q: When you say “fidelity” what do you mean by that?
A: We mean the Hamming distance between classical basis states when we talk about paramagnetic order. That is, our fidelity is 99% if a fraction 99/100 of our spins are correctly aligned when we image the sample.
One question that I didn’t get to ask: what can they say about models which have degenerate ground states? I’m obviously thinking about e.g. the toric code and other topologically ordered models. Can they prepare specific pure states in the ground state manifold?

André Chailloux, Optimal Bounds for Quantum Bit Commitment. Joint work with Iordanis Kerenidis, based on 1102.1678.

Some basic primitives for secure transactions:

Unlike QKD, here there are only 2 players, who distrust each other.
Let Alice and Bob’s optimal cheating probabilities be $latex P_A^*$ and $latex P_B^*$.
The best known quantum protocol achieves
$latex \max\{P_A^*, P_B^*\} = 3/4$ [Ambainis ‘01]
and the best known lower bound is
$latex \max\{P_A^*, P_B^*\} \geq 1/\sqrt{2}$ [Kitaev ‘03]
We improve the bound to $latex \max\{P_A^*, P_B^*\} \geq 0.739$ and build QBC protocols that get arbitrarily close to this bound.
The proof uses Mochon’s weak coin flipping protocol.

Gilles Brassard, Merkle Puzzles in a quantum world. Joint work with Peter Hoyer, Kassem Kalach, Marc Kaplan, Sophie Laplante, Louis Salvail, based on arXiv:1108.2316

Ralph Merkle was the inventor of Public Key Crypto in 1974 before Diffie and Hellman (but after the classified discovery). Gilles began with a great quote from Merkle:

The human mind treats a new idea the same way as the human body treats a new protein.

See Merkle’s web site for the amazing story of how he proposed public-key cryptography as a class project in grad school, and the professor described it as “muddled terribly”, and then he submitted it to JACM, which rejected it because it was “outside of the mainstream of cryptographic thinking.”
Key Distribution problem as imagined by Merkle:
Alice and Bob talk over an authenticated public channel. After doing so Alice and Bob should get a shared secret. Can only be computationally secure, but back then it was not even thought to be computationally possible.
The goal is to make the eavesdropper’s effort grow as fast as possible compared to Alice and Bob’s effort.
We will measure effort by query complexity to a random oracle. We use that model because it allows one to prove lower bounds.
Merkle’s 1974 idea:

  1. Choose a hard-to-invert, but injective, function f that acts on $latex [N^2]$.
  2. Alice chooses N random values of x and computes the values of f(x).
  3. Alice sends all these values to Bob.
  4. Bob wants to invert one of them, so he keeps trying his own random x’s until he gets a collision; call it s.
  5. Bob sends f(s) to Alice, and Alice finds the secret s.

So after N queries by Alice and roughly N by Bob, they share the same s. The eavesdropper has seen only N useless f(x) values and one useful f(s), which takes about $latex N^2/2$ queries to invert, compared to N for Alice and Bob.
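Here is a toy Python rendering of the scheme, with SHA-256 standing in for the hard-to-invert function (an illustrative assumption: Merkle’s argument lives in the random-oracle query model, not in any concrete hash):

```python
import hashlib, random

N = 1000
f = lambda x: hashlib.sha256(str(x).encode()).hexdigest()  # "random oracle"

# Alice queries f on N random points of [N^2] and publishes the images.
alice_xs = random.sample(range(N * N), N)
alice_table = {f(x): x for x in alice_xs}
published = set(alice_table)

# Bob queries random points until he hits one of Alice's images
# (about N queries in expectation, since each try succeeds with chance N/N^2).
while True:
    s = random.randrange(N * N)
    if f(s) in published:
        break

# Bob announces f(s); Alice inverts it with her table. Shared secret: s.
assert alice_table[f(s)] == s
# Eve, seeing only the N published images and f(s), must invert f(s) by
# brute force: ~N^2/2 queries, versus ~N each for Alice and Bob.
```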
Ok, so this is only a quadratic separation, but we don’t actually know that Diffie-Hellman is any better. The lower bound was proven by Barak and Mahmoody ‘08.
What about the quantum world, where the eavesdropper is quantum? Alice and Bob could be quantum but don’t need to be. The channel remains purely classical, so A&B can’t do QKD.
If Eve is quantum, and Alice and Bob are classical, then Eve can find the key in O(N) time, which is the same amount of effort that Alice and Bob expend (albeit quantum and not classical). Unfortunately, this breaks the Merkle scheme completely.
Can it be repaired by allowing A&B to be quantum, or even keeping them classical?
One option is to keep Alice classical, use a space of size $latex N^3$, and allow Bob to use BBHT to find one preimage in O(N) quantum queries. Eve using Grover needs $latex N^{3/2}$ queries, so Alice and Bob get an advantage over Eve.
An improved protocol is for Bob to find two preimages in O(N) quantum queries.
Bob sends Alice the bitwise XOR of f(s) and f(s’). Given this Alice uses a table to find the secret.
This gives effort $latex O(N^{5/3})$ for Eve and O(N) for Alice and Bob. (It’s not trivial to show Eve can do this well, and even less trivial to show that this is optimal.)
The proof involves something similar to Ambainis’ element distinctness algorithm with a quantum walk on the Johnson graph. A crucial observation relates to the composition of our variant of element distinctness on N elements with searching each function value in a set of size $latex N^2$. This involves a new composition theorem for non-boolean functions.
What if Bob is classical?
Like before, there are two functions f and t, each acting on $latex N^2$ points.
Bob finds two preimages as in Merkle. The rest is the same: bitwise XOR.
How much time does a quantum eavesdropper need? $latex N^{7/6}$.
Also, there is a sequence of protocols for which best attack we can find tends to $latex N^2$ in the limit. See paper for details. 🙂

Salman Beigi,
Simplified instantaneous non-local quantum computation with applications to position-based cryptography, joint work with Robert Koenig.

Position-based crypto (i.e. authentication of position) works by distance-bounding techniques.
Challenge-response protocol:
V1 ———-P—————–V2
The verifiers send challenges timed to arrive simultaneously at $latex r_0$, the claimed position of the prover.
The honest prover computes and sends the answers.
Challenge/response is susceptible to attack by colluding provers, each copying and forwarding the challenge. This shows why classical protocols are insecure.
Kent, Munro and Spiller 2011 gave protocols with both insecurity and security proofs.
Cheating strategy uses teleportation
In general a quantum protocol is a bipartite unitary.
An attack is a combination of local operations, shared entanglement and one round of communication sufficient to simulate it.
Vaidman ‘03 achieves this with doubly exponential cost in ebits, via recursive teleportation.
Buhrman et al ‘09 key insights:

  • A simplified protocol uses $latex O(2^n)$ ebits.
  • It uses port-based teleportation, due to Ishizaka and Hiroshima (PRL 2009).
  • The parties share K maximally entangled states. The measurement is highly complicated, but recovery is very simple: Alice discards all but one of her halves of the entangled states.
New protocol for instantaneous measurement.
Position based crypto: cheating landscape
  • Vaidman: doubly exponential entanglement; can always cheat.
  • Port-based: singly exponential entanglement suffices for cheating.
  • If cheaters are restricted to classical communication and only linear entanglement, one can get exponential soundness for position-based crypto.
  • If cheaters are allowed quantum communication and linear entanglement, one can get only constant soundness.

Florian Speelman, Garden Hose Game and Application to Position-Based Quantum Cryptography. Joint work with Harry Buhrman, Serge Fehr, Christian Schaffner, based on arXiv:1109.2563.

Position verification applications:

  • launching nuclear missiles (against enemies not equipped with FTL neutrinos, of course)
  • preventing prank calls of ambulances, SWAT teams or pizzas
  • communicating with right wifi router
  • computing international roaming fees properly
  • for when you want to know you are talking to S. Korea and not N. Korea
  • proving that “live”-blogging is actually occurring from the conference location, and is not just the result of typing out our previous opinions of the results.

On the positive side, one can show security if it is promised that there is no shared entanglement.
They show a specific class of schemes that obtain a tradeoff: increased classical communication for honest players forces cheaters to use more entanglement.
Example: classically secure but broken by teleportation
Verifier 0 sends $latex \psi$ to the prover.
Verifier 1 sends a bit to the prover.
The prover routes $latex \psi$ left or right according to the bit.
Instead of one bit, use a function:
V0 sends the state and an n-bit string x to the prover.
V1 sends an n-bit string y to the prover.
The prover computes the function f(x,y) and sends $latex \psi$ to V0 or V1 depending on the outcome.
The nice thing is that this is no harder for the honest prover.
But for cheating provers, more entanglement is needed; they can do it with $latex 2 \times 2^n$ ebits.
Garden hose model
A discrete way of looking at attacks of this sort.
Alice and Bob share s pipes between them. Alice has a water tap.
Alice and Bob connect the ends of the pipes on their own sides, like Enigma plug cords, in patterns determined by x and y respectively.
f(x,y) is whether the water comes out on the left or the right.
The garden hose complexity of a function is the number of pipes needed to compute the function this way.
Every piece of hose corresponds to a teleportation.
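A garden hose computation is straightforward to simulate. Below is a hypothetical minimal sketch; how x and y get encoded into the two matchings is left abstract, since that encoding is exactly what a garden hose protocol for a given f specifies.

```python
def garden_hose(tap_pipe, alice_pairs, bob_pairs):
    """Trace the water through shared pipes. `alice_pairs`/`bob_pairs`
    match up pipe ends on each player's side (determined by x and y).
    Returns the side ("alice" or "bob") where the water spills out,
    which encodes the value f(x, y)."""
    alice = {**alice_pairs, **{v: k for k, v in alice_pairs.items()}}
    bob = {**bob_pairs, **{v: k for k, v in bob_pairs.items()}}
    pipe, side = tap_pipe, "bob"            # tap water crosses to Bob first
    while True:
        conn = bob if side == "bob" else alice
        if pipe not in conn:
            return side                     # open end: water spills here
        pipe = conn[pipe]                   # follow the hose to another pipe
        side = "alice" if side == "bob" else "bob"   # cross back through it

# Toy run: Bob connects pipes 0 and 1, Alice connects nothing, so water
# enters pipe 0, comes back through pipe 1, and spills on Alice's side.
print(garden_hose(tap_pipe=0, alice_pairs={}, bob_pairs={0: 1}))   # alice
```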
Can use the GH model to prove upper bounds:

  • Equality: $latex 2n + O(\log n)$
  • Inner product: $latex 4n$
  • Majority: $latex n^2$

If f is in logspace, then GH(f) is poly(n).
But there do exist functions with exponential GH complexity.
We can prove lower bounds for explicit functions, but the best we can prove are linear.

Poster Session 1

The poster session had tons of great posters that we will not remotely do justice. Instead, here are some photos from a few of them. Try to guess which ones. 🙂


Quantum Information in Quantum Many-Body Physics — Live Blogging Day 4

The fourth and final day of the conference. Update Oct 26: The slides from the talk will be posted online here.

Quantum Information in Quantum Many-Body Physics

Roger Melko, Entanglement entropy in exotic phases of quantum matter.

Review of quantum Monte Carlo (QMC). Sign problem. Want to compute Rényi entanglement entropies for frustrated spin systems. He states that the record for Lanczos diagonalization for just the ground state is… 48 spins. (Wow! Presumably this is exploiting some specific symmetries, but still, that’s impressive.) Worried about “weak” critical points where the correlation length is merely larger than the maximum simulable system size, but doesn’t actually diverge in the thermodynamic limit. Works on trying to distinguish these possibilities. In systems with long-range entanglement or topological order, the entanglement scaling can act as a “Landau-like” order parameter. Can also give a picture of “non-Landau” (deconfined) quantum critical points. The subleading correction to the area law contains universal information. We will compute Rényi entanglement entropy. Introduce the valence bond basis for QMC sampling (Sandvik PRL 2005). In particular, use projector QMC. The estimator that they use for estimating expectation values is weighted by the action of a projector on a trial (valence bond) state. In this way, many quantities can be estimated such as energies, order parameters, real-space correlation functions, etc. In order to measure entanglement, we can use the swap test. We can use these techniques to study many models: Heisenberg (Néel ordered ground state), resonating valence bond spin liquids, J-J’ models (conventional quantum critical points), J-Q models (unconventional, deconfined quantum critical points). Showed some results for Heisenberg model on a torus. Stochastic series expansion: expand the finite-temperature partition function by expanding the exponential in its Taylor series. You get powers of $latex -\beta H$, which you can sample in various ways and using various update rules. (Several buzzwords that I didn’t understand: worm algorithms, directed loop algorithms?) To access Rényi entropies, we can use the replica trick by gluing together several Riemann sheets (depending on which order of the Rényi entropy is desired). Can also compute (Rényi) mutual information. Want to study spin liquids. One possible model, the kagome lattice Bose-Hubbard model, supports a $latex \mathbb{Z}_2$ quantum spin liquid. This model is a loop gas similar to the toric code. When computing (Rényi) topological entanglement entropy there is a finite-size “chord length” term that doesn’t exactly cancel. Showed some numerics for this model.
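The quantity behind the swap-test trick mentioned above is $latex \mathrm{tr}(\rho_A^2)$, estimated on two replicas via $latex \mathrm{tr}(\rho_A^2) = \mathrm{tr}[\mathrm{Swap}_A\, (\rho \otimes \rho)]$. Here is a minimal numpy check of that identity (just the identity, not the valence-bond estimator itself) for a two-qubit pure state:

```python
import numpy as np

theta = 0.3                                  # entangled two-qubit state
psi = np.cos(theta) * np.kron([1, 0], [1, 0]) \
    + np.sin(theta) * np.kron([0, 1], [0, 1])
rho = np.outer(psi, psi)

# Direct: partial trace over qubit B, then S_2 = -log2 tr(rho_A^2).
rho_A = np.einsum('abcb->ac', rho.reshape(2, 2, 2, 2))
S2_direct = -np.log2(np.trace(rho_A @ rho_A).real)

# Replica trick: swap only the A factors of two copies of rho.
swap_A = np.zeros((16, 16))
for a1 in range(2):
    for b1 in range(2):
        for a2 in range(2):
            for b2 in range(2):
                row = ((a1 * 2 + b1) * 2 + a2) * 2 + b2   # |a1 b1 a2 b2>
                col = ((a2 * 2 + b1) * 2 + a1) * 2 + b2   # A labels swapped
                swap_A[row, col] = 1
S2_swap = -np.log2(np.trace(swap_A @ np.kron(rho, rho)).real)
print(np.isclose(S2_direct, S2_swap))        # True
```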

Frank Verstraete, The entanglement spectrum of strongly correlated quantum many-body systems.

Began with an overview of what questions people care about with regard to entanglement spectrum. Area laws, universal terms, dimensional reduction and holography. Says the following (I’m paraphrasing except for the quotes): Since I was reluctant to prepare my talk, I’ve decided, “with a stroke of genius” to turn this talk around and ask the audience what they would like to hear about so that I don’t disappoint anyone. He discusses the notion of PEPS as dimensional reduction: we argue that we can map 2D quantum systems to 2D classical systems (the PEPS), even though this is potentially difficult (complexity-theoretic obstacles in general) and we know that 2D classical systems can be mapped to 1D quantum systems. But we know how to deal with 1D quantum systems if locality is maintained throughout this series of mappings. These methods should work for gapped systems, but will break down for critical systems. A long review of why we care about area laws with some examples and numerics for gapped and critical systems. We can only get area laws if there is very little weight in the tail of the Schmidt spectrum. But you also need to show that when you cut small Schmidt coefficients across one cut, you don’t truncate large Schmidt coefficients across another cut. “Because there is an isometry that maps virtual degrees of freedom to physical ones in an MPS, this proves that the virtual degrees of freedom are real. This is the main message of this talk.” (?) Symmetries present in the state are inherited in the tensor network description, for example in the AKLT spin chain. Review of cMPS. Showed some numerical calculations of the entanglement spectrum in the Lieb-Liniger model using cMPS. Even though this model is known to be critical, we see fast decay of the Schmidt coefficients.

Stephan Inglis, Measuring Rényi entropies in finite-temperature QMC using Wang-Landau sampling.

Review of definition of Rényi entropy and why we care about it. Review of classical MC with the Metropolis update rule. The Wang-Landau (WL) algorithm calculates the density of states instead. Assume an initial DOS, generate a new state and accept it with probability $latex p$, then update the DOS by multiplying by some factor $latex f$ so that $latex g_{i+1}(E) = g_i(E) \times f$. The acceptance probability is $latex p = \min(g(E_i)/g(E_f), 1)$, where $latex E_i$ and $latex E_f$ are the energies before and after the proposed move. WL works by calculating the DOS as a function of the energy. In stochastic series expansion (SSE) quantum Monte Carlo (QMC), the energy of the system is related to the number of operators sampled, so a more natural way to parameterize things is in terms of $latex \beta$ and a power series expansion. WL gives a much simpler way of computing Rényi entropies because there is no integration needed. At finite temperature, we can use mutual information instead. Showed a few examples. Now a discussion of the noise in WL. Is there a self-consistent way to check the noise? Check the first derivative of the DOS. The noise in the derivative seems to be uncorrelated, so we can just average several simulations to converge to accurate results. The advantages of WL: only one simulation is required rather than many, there is no explicit cumulative integration error, and results can be improved building on old data rather easily. The difficulties are that a single simulation takes longer and it is harder to measure observables that are not related to energy.
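For concreteness, here is a minimal classical Wang-Landau sketch on a toy model (a 1D Ising ring; the talk’s version runs inside SSE QMC and parameterizes by expansion order rather than energy, but the accept-then-update rule is the same):

```python
import numpy as np

L = 8
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=L)
E = lambda s: -int(s @ np.roll(s, 1))        # ring energies: -8,-4,0,4,8

log_g = {e: 0.0 for e in range(-L, L + 1, 4)}  # work with log g(E)
log_f = 1.0
e_old = E(spins)
while log_f > 1e-3:                          # shrink f until converged
    hist = {e: 0 for e in log_g}
    for _ in range(20000):
        i = rng.integers(L)
        spins[i] *= -1                       # propose a single spin flip
        e_new = E(spins)
        delta = log_g[e_old] - log_g[e_new]
        if delta >= 0 or rng.random() < np.exp(delta):
            e_old = e_new                    # accept with min(g_old/g_new, 1)
        else:
            spins[i] *= -1                   # reject: undo the flip
        log_g[e_old] += log_f                # multiply g(E) by f; log a visit
        hist[e_old] += 1
    if min(hist.values()) > 0.8 * np.mean(list(hist.values())):
        log_f /= 2                           # histogram flat enough: refine f

norm = L * np.log(2) - np.logaddexp.reduce(list(log_g.values()))
print({e: round(np.exp(lg + norm)) for e, lg in sorted(log_g.items())})
# Exact DOS for comparison: {-8: 2, -4: 56, 0: 140, 4: 56, 8: 2}
```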

Jutho Haegeman, Time-dependent variational principle for quantum many-body systems.

Time dependent methods for quantum lattice systems include time-evolving block decimation (Vidal). There are several disadvantages, however: the method breaks symmetries, the state temporarily leaves the variational manifold, the truncation is suboptimal for infinite-sized systems, and it is manifestly discrete in time. Can we implement this with cMPS? We have to use some results from Dirac. We project onto the tangent plane of the variational manifold at each point in time. If the time-dependent evolution never leaves the manifold, then you get a non-linear differential equation within the manifold. This method inherently respects all symmetries and there is no Trotter error. It is globally optimal (spatially… it is still local in time) and there is no truncation. We will work with uniform MPS. We need to compute overlaps of tangent vectors, and this introduces a metric. We can simplify this through gauge fixing. MPS are gauge invariant in the sense that if we conjugate the bond indices we get the same physical state. Taking tangent vectors, we get an additive correction which depends only on the choice of gauge, so we can pick the most convenient choice. Describes (pictorially) some gauge fixing conditions which make things particularly nice. Can use this for both imaginary and real time evolution. The method also provides additional checks on the convergence of the method by checking the length of the tangent vector. Showed some examples of numerics. Near a steady state solution, you can linearize the differential equation to simplify things. This leads to the linear (tangent) subspace around the stable point (Rayleigh-Ritz equations). Can study excitations with momentum $latex k$ by combining the Östlund-Rommer ansatz and the Feynman-Bijl ansatz (single-mode approximation). More numerics. Could also study domain walls with momentum $latex k$ using the Mandelstam ansatz. More numerics.

Ann Kallin, Scaling of entanglement entropy in the 2D Heisenberg ground state.

QMC in the valence bond basis for measuring the Rényi 2-entropy. Spin-1/2 antiferromagnetic Heisenberg on a square lattice with only nearest-neighbor interactions. Use the power method to get something close to the ground state. Showed some numerics… the method converges well on up to a 20 by 20 lattice. They do QMC sampling with importance weighting. The power of the Hamiltonian acting on the trial state can be written as a sum over products and this is what they sample. Then they use an update rule to switch the valence bonds to do the sampling. Computing the overlap between two valence bond states is easy, and just amounts to counting the number of loops in the resulting diagram when you superimpose both valence bond patterns on the same lattice. This is the “regular” VB QMC. There is also a “loop” version of the algorithm. Now you have two trial states and sandwich your $latex m$ different operators from the Hamiltonian. This method is similar to the SSE method discussed earlier in the conference, but with different boundary conditions. Then some numerics with 100 sites on a 1D Heisenberg chain. The loop algorithm converges quite quickly as long as the size of the region isn’t too large (as a fraction of the total system size). One further improvement is to use a ratio trick, namely, measure ratios of entropies for different region sizes. In 2D, they wanted to look at the effect of corners in the region on the entropy. Chose square-shaped regions and stripes around the torus. Plot scaling of the 2-entropy as a function of the subsystem size. They fit to a function of the form $latex aL + b\log L + c$. Showed some nice pictures of “Rényibows” which simultaneously plot the entropy over several system sizes and subsystem sizes. (The name comes from the colors used in the plot.) One of the things they observed was a 1D CFT-like chord-length correction in 2D systems. Also saw some effects due to corners.

Johannes Wilms, Mutual information in the Lipkin-Meshkov-Glick model.

We will work with mutual information because it is a quantity which describes mixed states, and we will consider finite temperature. Define the LMG model: every spin interacts with every other spin ($latex XX$ coupling) together with a transverse field. This model was originally studied in the context of atomic nuclei, and more generally for shells of finite Fermi systems and metal clusters, He-3 droplets, “magnetic molecules”, also mappings to BEC and cavity QED. More recently people have looked at ground state and thermal properties. Shows the phase diagram, temperature versus magnetic field strength. We can compute many things in this model by exploiting symmetry. For example, total angular momentum is a good quantum number. The Hamiltonian decomposes into polynomial-sized blocks which have exponential multiplicity. The best way to study the problem is to use Clebsch-Gordan coefficients. Showed a plot of concurrence in the phase diagram… it doesn’t work so well. On the other hand, the mutual information perfectly tracks the phase transition. They looked at the scaling of the mutual information at criticality. They find that it scales linearly with the log of the system size with a prefactor of 1/4 in the finite temperature case (as opposed to 1/3 which is what you expect at zero temperature).

Winton Brown, Quantum Hammersley-Clifford theorem.

The classical HCT is a standard representation theorem for positive classical Markov networks. Recently, quantum Markov networks have been used in approximation methods for lower bounds on the free energy of quantum many-body systems. Defined conditional mutual information and the strong subadditivity inequality. The Markov condition in the classical case is given by $latex I(A:C|B) = 0$, which implies that $latex p_{ABC} = p_{A|B} p_B p_{C|B}$ where $latex p_{A|B} = p_{AB}/p_B$. A Markov network is a generalization of a Markov chain to an arbitrary graph $latex G$. The Markov condition just says that whenever I partition the graph into three sets of vertices $latex A, B, C$ with $latex B$ separating $latex A,C$, then $latex I(A:C|B) = 0$. The classical HCT says that any positive probability distribution factorizes over the cliques of $latex G$. The proof is simple. Take the log of the probability distribution. From the conditional independence, we find that $latex \log p$ is a sum of two terms that act trivially on either $latex A$ or $latex C$. … couldn’t get the rest down in time. For quantum states, we just use the quantum conditional mutual information. What quantum states can be supported in this framework? We immediately find that $latex \rho = \rho_{AB} \rho_{BC}$ where the two factors commute. Then repeating the proof from the classical case runs into trouble because we don’t know if the various terms commute. Consider a graph which has no 3-cliques. On these graphs we have enough structure that each two-body operator has to be zero. This is almost enough, but some 1-body operators might remain. But it turns out this is not an obstacle. Unfortunately, there is no simple general result. A counterexample exists with a five-vertex graph with a single fully connected central node and a square of four nodes surrounding it. Now consider a connection to PEPS. We can construct a PEPS by applying a linear map to each site of virtual spins. If the linear map is unitary, then the PEPS built in this way are quantum Markov networks. Under what condition can we show the reverse implication? For a non-degenerate eigenstate of a quantum Markov network, the Markov properties imply an entanglement area law. Thus, an HC decomposition implies a PEPS representation of some fixed bond dimension. Open problem: show under what conditions quantum Markov networks which are pure states have an HC decomposition. There exist non-factorizable pure state quantum Markov networks (simple example).
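As a concrete instance of the classical Markov condition, this little sketch builds a random distribution that factorizes through $latex B$ and checks $latex I(A:C|B) = 0$ numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
# A Markov chain A - B - C: p(a,b,c) = p(b) p(a|b) p(c|b).
pB = rng.dirichlet(np.ones(2))
pA_given_B = rng.dirichlet(np.ones(2), size=2)   # row b holds p(a|b)
pC_given_B = rng.dirichlet(np.ones(2), size=2)
p = np.einsum('b,ba,bc->abc', pB, pA_given_B, pC_given_B)

def H(q):
    """Shannon entropy (bits) of a flattened distribution."""
    q = q[q > 0]
    return float(-np.sum(q * np.log2(q)))

# I(A:C|B) = H(AB) + H(BC) - H(B) - H(ABC)
I = (H(p.sum(2).ravel()) + H(p.sum(0).ravel())
     - H(p.sum((0, 2))) - H(p.ravel()))
print(np.isclose(I, 0.0))                        # True: B screens off A from C
```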

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 3

Day 3 of the conference.

Quantum Information in Quantum Many-Body Physics

Guifre Vidal, Criticality, impurities and real-space renormalization, joint work with Glen Evenbly.

First an overview of the multi-scale entanglement renormalization ansatz (MERA). Basic notions: isometries, disentanglers, causal cone. Bounded causal cone width means that it is possible to efficiently evaluate the expectation value of local observables. One can also coarse-grain a local operator by applying one level of the MERA. (For ternary MERA, a two-body term gets mapped to a two-body term.) We will focus on critical systems, and then put in impurities. To enforce scale and translation invariance, we characterize the MERA by just a single pair of tensors $latex U$ and $latex W$ repeated across the whole network. MERA seems to be a good ansatz to describe critical systems where such invariance is manifest. MERA also can exhibit polynomial decay of correlations. Define the scaling superoperator. The eigenvalues and eigenvectors of this superoperator are the scaling operators and (the exponential of) the scaling dimensions of the underlying CFT. How does one choose the optimal tensors given a local Hamiltonian? He dodges the question… doesn’t want to get into such details in this talk. Showed an example for the 1D critical Ising model with transverse field. Now let’s consider an impurity at the origin. Now we lose translation invariance, but we still have scale invariance (conformal defect). The bulk scaling operators in the absence of an impurity have zero expectation. However, in the presence of an impurity the one-point functions decay like a power of the distance to the impurity. We need to add new scaling operators and new scaling dimensions. Translation invariance is broken, but we can’t possibly optimize over an extensive number of tensors (because we could never reach the thermodynamic limit). Well, desperate times call for desperate measures. Introduce something called “Principle of minimal influence” in RG flow. PMI says: Given a MERA for the initial (pure) problem, we can obtain a MERA for the problem with impurity by then varying the tensors in the causal cone for the region containing the impurity. We don’t expect the principle to work in all cases, such as in systems with spontaneous symmetry breaking where the impurity restores the symmetry. We are still exploring the limits of applicability of this principle. If you split the isometries into two “half isometries”, you get that the total number of tensors you need to optimize is still $latex O(1)$. Why is this ansatz plausible? For one thing, we can show that the scaling superoperator now exhibits power-law decay with respect to the distance from the impurity. The optimization goes as follows. First optimize with respect to the bulk Hamiltonian (no impurity). Next, use the bulk tensors to map the original impurity problem to an effective impurity problem. This gives a new Hamiltonian problem with exponentially decaying interaction strengths. Finally, we can reduce the optimization via “folding in half” to that of an MPS for an open boundary problem. This same form appears in Wilson’s solution of the Kondo problem. However, the impurity MERA can be used beyond the setting of free fermions, unlike Wilson’s original approach. Showed example of transverse-field Ising model with $latex XX$ impurity. This model allows an exact solution via CFT, and the numerics agree with the exact solution for various impurity strengths. We can extend this to more than one impurity by treating each impurity individually in their non-intersecting causal cones, and then once the causal cones intersect we do one more variation.
We can also have an impurity that “cuts” the chain or a boundary between two half-chains coupled by the impurity, and even boundary conditions.

André-Marie Tremblay, Success and limitations of dynamical mean-field theory, joint work with Sordi, Sénéchal, Haule, Okamoto, Kyong and Civelli.

First recall how one “builds” a metal. We take a lattice and put orbitals on the sites. If the orbitals overlap, then it takes no energy to excite an electron. However, this doesn’t always work, e.g. in NiO. Recall the Hubbard model with on-site $latex U$ and hopping strength $latex t$ (and $latex t’$ for nearest diagonal neighbors, etc.) When $latex U=0$, we can Fourier transform and solve the model. When $latex t=0$, everything is localized, and there are $latex 2^n$ degenerate ground states at half filling corresponding to the different electron configurations with one electron per site. We can observe antiferromagnetism in the Hubbard model. We see a Mott transition in the region where $latex t$ and $latex U$ are of the same order, and at this point in the phase diagram neither the plane wave nor the localized wavefunctions gives a good description. One can write an effective model with Heisenberg coupling. We would like to measure Green’s functions for a single particle. Why do we care about Green’s functions? We can use them to compute the value of other observables and can measure them directly via photoemission. Fermi liquid theory means that it is not too far from a free electron problem. We want to be able to solve in the presence of an impurity. Let’s review dynamical mean-field theory (DMFT). We begin working in infinite dimensions. Then the self-energy is a function only of the frequency. We want to compute the self-energy of the impurity problem. Then use that self-energy (which depends only on frequency) for the whole lattice. Now project the lattice onto a single site and adjust the bath so that the single site density of states obtained both ways are equal. There are some self-consistency equations. We can use a number of methods: cavity method, local nature of perturbation theory in infinite dimensions, expand around the atomic limit, effective medium theory, Potthoff self-energy functional. Use the universal self-energy functional method (see the talk by Sénéchal). Vary with respect to the parameters of the cluster (including Weiss fields). In this context, we can view DMFT as a stationary point in the variational space. What about the impurity solvers? The most trivial one is just exact diagonalization. Could also do Monte Carlo using an ergodic Markov chain. Showed a bunch of plots of these numerical methods for various models. The strengths of DMFT are that one can obtain one-particle properties and phase diagrams; weaknesses include: order parameters that are not coupled to observables quadratic in creation and annihilation operators, transport (four-point correlation functions), and analytic continuation for QMC solvers. The future challenge is to find the optimum environment.

Matthias Troyer, Galois conjugates of topological phases, arXiv:1106.3267.

The question is: should we explore the computational power of non-unitary topological phases? Such phases appear through Galois conjugation of unitary topological phases. The arising models are non-Hermitian, but still have a purely real spectrum. Do they make sense? First, a review of topological phases and anyons (both Abelian and non-Abelian). Review of $latex \mathrm{SU}(2)_k$ fusion rules. These states might appear in the Pfaffian state (Moore & Read) which has Ising anyons (level $latex k=2$) or in the parafermion state (Read & Rezayi) which has Fibonacci anyons (level $latex k=3$). How can we make more exotic states? We start with an anyonic liquid on a high-genus surface. Picture two sheets which are connected by fusing a bunch of perforations through both sheets. These create doubled (non-chiral) models. We can consider two types of quantities: flux through a hole and flux through a tube. We can build a family of basis states to describe the anyon wavefunctions by building a “skeleton” through each of the tubes. The basis states are labeled by string nets of particle labels on the skeleton lattice. Now we add the Hamiltonian for the model. We can either add flux through a tube or a hole, and each of these costs some different amount of energy. Each of these terms acts on either plaquettes or edges of the skeleton lattice. The plaquette term gives a ground state without flux through the plaquettes. This is like closing each of the holes: we get vacuum states of anyons on each of the two sheets. This is just the toric code or the Levin-Wen model. It’s a topological phase which is robust at zero-temperature to weak perturbations. This can be solved exactly, even for the excited states. Violations of the vertex constraint are chiral anyonic excitations living on the surface. Consider a finite density of anyons, and let’s pin the anyons into fixed positions. We would like to add a Heisenberg-like interaction Hamiltonian to this. We just introduce an energy difference between the various labels of the particle types. Review of the fusion basis for anyons, specialized to the Fibonacci case. Review of the F-moves ($latex 6j$ symbols, Clebsch-Gordan), the pentagon and hexagon equations. So what does a Heisenberg chain for anyons look like? $latex H = \sum_i F_i \Pi_i^0 F_i$ where $latex F_i$ is the F-move that exchanges two nearest-neighbors in a chain and $latex \Pi_i^0$ is a projector onto a maximally entangled state. Ran numerical simulations to compute the gap… but it was gapless! The gap decreased like 1/L, so we guess it is a critical model. We can compute the central charge using the connection with entanglement entropy. It turns out that the model is exactly solvable by using the Temperley-Lieb algebra. They studied these interacting chains for various values of the level $latex k$. There are lots of equivalences known to CFTs depending on the central charge and the sign of the coupling (antiferromagnetic or ferromagnetic). Plot of the energy spectrum, with very good agreement between exact solution and numerics. When there are finite density interactions, there are gapless edge modes and a nucleated liquid. This foreshadows the situation in 2D. You get a nucleated liquid with a finite bulk gap and edge states. A novel quantum Hall liquid is nucleated inside the parent liquid. Now we move to the Galois conjugation. Review of Galois theory. Different roots of a polynomial are Galois conjugates of each other.
The algebraic properties are the same, but the geometric properties are different. As an example, consider again the Fibonacci case. The entries of the F-tensor $latex F_1^{111}$ can be expressed in terms of the golden ratio $latex \phi$, or alternatively in terms of the conjugate root $latex -1/\phi$. However, this second theory (the Yang-Lee theory) is non-unitary. But if you look at the spectrum of the antiferromagnetic Yang-Lee model, you get a completely real spectrum! The central charge is negative (-3/5). The vacuum state is not the lowest energy state in this model. Similar conclusions hold for the ferromagnetic case. How does the entanglement entropy scale? It can’t still obey $latex (c/3)\log(L)$ since the central charge is negative, right? Let’s look at the doubled Yang-Lee model. We can redefine the inner product to get a self-adjoint Hamiltonian (Bender et al., PT-symmetric quantum mechanics). We can solve this doubled YL model exactly, just like the Levin-Wen models before. Now the code property that defines topological order has a “left-right” eigenvector version, since left and right eigenvectors are now distinct notions. There are several ideas one might try for turning it into a Hermitian model: square the plaquette terms, or use just the projector onto the left or the right eigenspaces. But these approaches fail to give a robust topological phase… they are sensitive to local perturbations. “Hermitianizing” destroys the left-right code property. Might there be some local transformation to a Hermitian model? If not for Fibonacci, then maybe for some other non-unitary TQFT? Norbert: can you use a tensor network description of the ground state to make a Hermitian parent Hamiltonian? Which ground state do you choose, the left or the right? Main result: Galois conjugates of TQFTs where the braid group acts densely in $latex \mathrm{SU}(2)$ cannot be realized as ground states of quasi-local Hermitian Hamiltonians. In summary: non-Abelian topological phases offer universal quantum computation; the minimal models of CFT are Heisenberg models of non-Abelian anyons; Galois conjugation of topological phases gives intriguing non-unitary theories and non-Hermitian models; but non-unitary phases will not appear in physical systems. Tobias: can’t we add a single extra spin which couples to all the spins and make the model Hermitian?
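
Here is a small numerical illustration of “same algebra, different geometry” (my own sketch; the real, non-symmetric gauge for the conjugated F-matrix is my choice, made so the entries stay real): both matrices square to the identity and have real spectrum $latex \{+1,-1\}$, but only the golden-ratio one is unitary.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2      # golden ratio, a root of x^2 = x + 1
phibar = (1 - np.sqrt(5)) / 2   # its Galois conjugate, equal to -1/phi

# Unitary Fibonacci F-matrix in the standard symmetric gauge
F = np.array([[1/phi,          1/np.sqrt(phi)],
              [1/np.sqrt(phi), -1/phi        ]])

# Yang-Lee conjugate: sqrt(1/phibar) would be imaginary, so pick a real,
# non-symmetric gauge F' = [[a, 1], [1 - a^2, -a]], which still obeys F'^2 = I.
a = 1 / phibar
Fyl = np.array([[a,        1.0],
                [1 - a**2, -a ]])

for name, M in [("Fibonacci", F), ("Yang-Lee", Fyl)]:
    print(name,
          "| F^2 = I:", np.allclose(M @ M, np.eye(2)),
          "| unitary:", np.allclose(M @ M.conj().T, np.eye(2)),
          "| eigenvalues:", np.round(np.linalg.eigvals(M), 6))
```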

Renato Renner, Aisenstadt Lecture, An information-theoretic view of thermalization.

Consider a very trivial system: a classical binary random variable $latex C$ (for classical) which takes the values 0 or 1 with equal probability. Now let’s introduce an observer $latex A$ who has some knowledge about $latex C$. Suppose that $latex A=C$ with probability $latex 1-\epsilon$ and $latex A=1-C$ with probability $latex \epsilon$. Now when we condition the probability of $latex C$ on the state of $latex A$ we get a new distribution; a Bayesian would just call this an update. Suppose now that a new observer $latex Q$ has quantum information about the system, so that the conditional state of $latex Q$ is a quantum state. If we try to condition the state of $latex C$ on the quantum information $latex Q$, this becomes more interesting. For example, in quantum cryptography there is no reason to assume that the adversary is classical, so it makes sense to consider this. Now let’s move on to thermodynamics. Consider a Szilard engine, i.e. a box with one particle in it and a piston that can compress the “gas” to one half of the box. If we let the gas expand isothermally, or compress it, which gives or takes work, then we find that the work to transition between the two states is $latex W = k T \log 2$. From information theory, we know that a two-state system stores one bit of information. Erasing information costs $latex k T \log 2$ units of energy per bit, and Landauer showed that this is in fact optimal (Landauer’s principle), independently of the physical representation of the bit. The argument is: if you could erase with a lower energy cost, then you could use the Szilard engine to extract work for free from heat (which violates the second law). If we have conditional knowledge about the state of the Szilard engine, then we can extract a different amount of work, namely $latex W = k T \log 2 \, H(\rho)$, where $latex H$ is the binary entropy and $latex \rho$ describes our (potentially quantum) knowledge about the state of the system. (Technical note: this is the von Neumann entropy in the asymptotic setting and the max-entropy in the single-shot setting.) Consider a computer with some registers $latex R_1$ and $latex R_2$ that I would like to erase. If I erase them in sequence, then the total work required is just the sum of the individual works, using the previous formula. If instead we did some joint operation, we would only pay $latex \propto H(\rho_{12})$, which is in general less than the sum of the individual entropies. That is, in general we have $latex H(\rho_{12}) \le H(\rho_1) + H(\rho_2)$, so it makes sense to take advantage of the joint information. We could instead use a formula in terms of conditional entropies, by just changing the formula in the obvious way. It turns out that this is the correct formula: $latex W = kT\log 2\, H(S|\mathrm{obs})$, where $latex S$ is the system and $latex \mathrm{obs}$ represents the observer. If we take advantage of this conditional information, then we can serially achieve the same work as the optimal joint protocol. This is a manifestation of the so-called chain rule for entropy: $latex H(A|B) + H(B) = H(AB)$. Now let’s discuss thermalization. There is some large system (including an environment). If the total system is in a pure quantum state, how can it thermalize? What does that even mean? We can look only at a subsystem, which has a reduced density operator. Recall Alioscia’s talk for an example of some conditions under which the subsystem thermalizes.
However, if two separate subsystems are separately thermal, that does not imply that the joint system is thermal. If instead we condition on the other subsystem, we can indeed reach this conclusion. As before, we cannot naively condition on quantum information, but the conditional entropies are still well defined. We can introduce the decoupling lemma, which guarantees that the joint state is nearly a product state. It gives a sufficient condition for decoupling, phrased in terms of entropies conditioned on the observer: namely, decoupling typically holds if $latex H(S|\mathrm{obs}) \ge 2k - n$, where the total system has $latex 2^n$ degrees of freedom and the subsystem has $latex 2^k$. This is tight. Applying this condition to thermalization: if the subsystem is sufficiently small, then the subsystem will thermalize.
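
A purely classical toy version of the chain-rule point (mine; Shannon entropies of a made-up joint distribution stand in for the quantum conditional entropies): erasing jointly, or serially while conditioning, beats erasing the registers independently.

```python
import numpy as np

kT_log2 = 1.0  # work unit: k T log 2 per bit erased

# Joint distribution of two correlated bit registers R1, R2
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])  # p[r1, r2]

H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))  # Shannon entropy in bits
H12 = H(p)                          # H(R1 R2)
H1, H2 = H(p.sum(1)), H(p.sum(0))   # marginal entropies
H2g1 = H12 - H1                     # chain rule: H(R2|R1) = H(R1 R2) - H(R1)

print("separate erasure:      ", kT_log2 * (H1 + H2))
print("joint erasure:         ", kT_log2 * H12)
print("serial w/ conditioning:", kT_log2 * (H1 + H2g1))  # equals the joint cost
```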

Tobias Osborne, The continuum limit of a quantum circuit: variational classes for quantum fields, arXiv:1102.5524 and arXiv:1005.1268, joint work with Cirac, Eisert, Haegeman, Verschelde, Verstraete.

Review of quantum fields. There are several ways one might “solve” a QFT: perturbation theory, Monte Carlo, and variational methods. Historically there were problems using the variational method in QFT, namely sensitivity to the UV and poor reproduction of correlations. Review of MPS. Recall that an MPS can be built from a staircase circuit. Review of the area-law result due to Hastings. As Tobias himself has proved, logarithmic-time evolution can be simulated efficiently, too. Also, a review of MERA; we can view it as a quantum circuit which takes ancillas and prepares a state. How can we pass to the continuum for these variational classes? We can use the quantum circuit preparation prescriptions of MPS and MERA and take the time step of each gate to be infinitesimal. That’s the main idea. Some nice features of cMPS: it is possible to analytically compute all the correlation functions; there is a built-in UV cutoff; and there is generic clustering of correlations. One can build cMERA by alternating scaling transformations and local interactions for infinitesimal time steps, and taking the limit. There is one main difference between MERA and cMERA: flexibility in the UV cutoff. cMERA allow a smooth cutoff, but regular MERA only offer a lattice cutoff. These classes also obey entropy/area laws, rigorously in the case of cMPS, with a heuristic argument for cMERA. Gave an example of a simple bosonic system where the expressions are analytic for cMERA.
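
The staircase picture is easy to check numerically. Here is a little sketch (mine): one staircase layer of random two-qubit gates applied to $latex |0\rangle^{\otimes n}$ is an MPS of bond dimension 2, so the entanglement entropy across every cut is at most one ebit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

def rand_u4():
    """Haar-ish random two-qubit unitary via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply2(state, gate, i):
    """Apply a 4x4 gate to qubits (i, i+1) of a (2,)*n state tensor."""
    g = gate.reshape(2, 2, 2, 2)
    state = np.tensordot(g, state, axes=[[2, 3], [i, i + 1]])
    return np.moveaxis(state, [0, 1], [i, i + 1])

# One staircase layer acting on |00...0>
psi = np.zeros((2,) * n); psi[(0,) * n] = 1.0
for i in range(n - 1):
    psi = apply2(psi, rand_u4(), i)

for cut in range(1, n):
    s = np.linalg.svd(psi.reshape(2**cut, -1), compute_uv=False)
    p = s**2
    S = -np.sum(p[p > 1e-12] * np.log2(p[p > 1e-12]))
    print(f"cut {cut}: S = {S:.3f}  (bounded by 1 ebit)")
```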

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 2

Day 2 of the conference. By the way, it is pretty difficult to get all this stuff written down correctly the first time through! So, if you see any mistakes please leave a comment so I can fix them.

Quantum Information in Quantum Many-Body Physics

Barbara Terhal, Adiabatic Quantum Computation and Stoquastic Hamiltonians, arXiv:0806.1746 (joint work with Sergey Bravyi) and quant-ph/0606140 (joint work with Bravyi, DiVincenzo and Oliveira).

Some quantum Hamiltonians have ground states which are amenable to Monte Carlo techniques because of the absence of a sign problem. We say a Hamiltonian is stoquastic iff, in some product basis, the off-diagonal elements of each local term are real and non-positive. (One can apply local unitary operations to potentially transform a Hamiltonian into a stoquastic one.) Examples of stoquastic Hamiltonians: ferromagnets, the quantum transverse-field Ising model, Jaynes-Cummings, spin-boson and, importantly, Josephson-junction flux qubit Hamiltonians such as those engineered by D-Wave for adiabatic quantum computation (AQC). Given a stoquastic Hamiltonian, we can define $latex G=I-tH$ for some sufficiently small $latex t$, so that $latex G$ is an entrywise non-negative matrix. By the Perron-Frobenius theorem, the eigenvector with the largest eigenvalue has non-negative entries and can be normalized to a probability distribution. Therefore, we can potentially use quantum Monte Carlo (QMC) to sample from it. Theorem 1: Stoquastic frustration-free AQC can be efficiently simulated classically. Recall the matrix $latex G$ from before. We can sample the matrix elements of $latex G$ efficiently because the Hamiltonian is local. Since the Hamiltonian is frustration-free (FF), the ground state is annihilated term-by-term. We would like to set up a Markov chain which allows us to sample from the ground state. We can define a stochastic matrix from $latex G$ and the ground state amplitudes $latex \alpha_j$, namely $latex P_{j\to i} = (D G D^{-1})_{ij} = \frac{\alpha_i}{\alpha_j} G_{ij}$, where $latex D$ is the diagonal matrix with the ground state amplitudes $latex \alpha_i$ on the diagonal. Because the Hamiltonian is FF, we can compute the ratios $latex \frac{\alpha_i}{\alpha_j}$; this doesn’t seem to work in the non-FF case. Now if you iterate and there is a spectral gap, you will converge in polynomial time. In the frustrated case, we don’t know how to compute the ratios anymore. An alternative approach: we know the ground state is well approximated by the Gibbs state at low temperature. The Gibbs matrix is entrywise non-negative if the Hamiltonian is stoquastic (even if frustrated). The partition function is then a sum over non-negative weights. We can compute each of these weights efficiently, so now we want to estimate the sum with QMC (“world-line QMC”), using some kind of Metropolis rule for accepting and rejecting moves.
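
Here is a toy numpy version of the construction (mine; I cheat and get the amplitudes $latex \alpha$ from exact diagonalization, which is exactly the step that frustration-freeness would let you do locally): for a small stoquastic transverse-field Ising chain, $latex P = DGD^{-1}$ is stochastic and its stationary distribution is $latex \alpha^2$, i.e. it samples the ground state.

```python
import numpy as np
from functools import reduce

# Stoquastic transverse-field Ising chain: H = -sum Z_i Z_{i+1} - g sum X_i
n, g = 4, 1.0
I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])

def op(site_ops):  # tensor product over the n sites
    return reduce(np.kron, site_ops)

H = np.zeros((2**n, 2**n))
for i in range(n - 1):
    H -= op([Z if j in (i, i + 1) else I2 for j in range(n)])
for i in range(n):
    H -= g * op([X if j == i else I2 for j in range(n)])

E, V = np.linalg.eigh(H)
alpha = np.abs(V[:, 0])            # Perron-Frobenius: amplitudes can be taken >= 0
H0 = H - E[0] * np.eye(2**n)       # shift so that the ground energy is 0
t = 1.0 / np.max(np.diag(H0))      # small enough that G is entrywise non-negative
G = np.eye(2**n) - t * H0
assert (G >= -1e-12).all()

# P[i, j] = (alpha_i / alpha_j) G[i, j]; columns sum to 1 because G is
# symmetric and G alpha = alpha (the ground energy was shifted to 0).
P = G * np.outer(alpha, 1.0 / alpha)
print("stochastic:", np.allclose(P.sum(axis=0), 1.0))

pi = np.linalg.matrix_power(P, 2000) @ np.full(2**n, 1.0 / 2**n)
print("converges to |alpha|^2:", np.allclose(pi, alpha**2, atol=1e-6))
```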

Spiros Michalakis, Stability of Frustration-free Hamiltonians, arXiv:1109.1588. Joint work with J. Pytel.

The challenge is to find a minimal set of assumptions under which gapped Hamiltonians have a spectral gap that is stable against weak local perturbations. Not every gapped system is stable: consider a simple Ising model where the transverse field is a perturbation; this perturbation splits the degeneracy of the ground state. Another example is an Ising model with a defect at the origin. The lesson seems to be that local distinguishability leads to instability of the ground state degeneracy and/or the gap. Now define frustration-free Hamiltonians (FFH). Every FFH on a lattice $latex \Lambda$ is the extension of another FFH on some smaller ball $latex B\subset \Lambda$; the ground state projectors satisfy $latex P_B P_0 = P_0$, where $latex P_0$ is the global ground state projector. Now define topological quantum order (TQO): roughly, no sufficiently local operator should be able to distinguish two orthogonal ground states. The toric code is an example of a model with TQO. But TQO is not enough for stability, as a simple counterexample shows (the “Ising toric code with defect”; it’s simpler than it sounds). Let’s define local TQO (LTQO), which says that the local distinguishability between the various ground states, as measured by the operator norm, decays rapidly. This LTQO condition implies an area law (with a possible logarithmic correction, which is what we expect since we haven’t said anything about a spectral gap at this point). Next, we consider another condition, the local gap condition (LG): we want the local Hamiltonians (restricted to balls $latex B$) to have a gap which decays at most as an inverse polynomial in the size of $latex B$. Open problem: is LG always satisfied if the global Hamiltonian is a frustration-free sum of local projectors with a spectral gap? The main theorem: if $latex H$ is an FFH satisfying LTQO and LG, then under sufficiently weak quasi-local perturbations the gap is stable and the ground state degeneracy doesn’t split too much. Gave a very brief overview of the proof. Some open problems: how can we prove spectral gaps for 2D FFH? What symmetries of the Hamiltonian imply LTQO? Can we prove LG for all FFH that are sums of local projectors?

Norbert Schuch, Complexity of commuting Hamiltonians for qubits on a square lattice, arXiv:1105.2843.

Commuting Hamiltonians can have highly entangled ground states (e.g. the toric code). They clearly have a constant spectral gap, and we only need to estimate the ground state energy to constant accuracy. Is estimating the ground state energy here easier than for the full quantum problem? Main result: for qubits on a 2D square lattice, determining whether a commuting Hamiltonian is frustration-free is in NP. First we need a structure theorem for commuting Hamiltonians due to Bravyi and Vyalyi (2003). This decomposition was used to show that the 2-body commuting Hamiltonian problem is in NP. The decomposition does not generally extend to $latex k$-body for $latex k\ge 3$ or to qudits; however, if the connectivity is “sparse enough” then the decomposition works. Idea number one: split the lattice into “white” and “black” squares and use the fact that each layer individually can take advantage of the BV decomposition. But we don’t know if these two decompositions are compatible, so we need a new idea. We can tell if a FF ground state exists if we can find states in each layer that have non-trivial overlap… but how can we compute the overlap? We need to take advantage of the structure. The point is that for qubits the overlap can be computed efficiently: when the spins are qubits, there are only two possibilities for the BV decomposition. One possibility is trivial, whereas the second doesn’t allow any branching structures to form, so the overlap can be computed efficiently. This proof does not provide a constructive procedure for finding the ground state.

Maarten Van den Nest, A monomial matrix formalism to describe quantum many-body states, arXiv:1108.0531.

The motivation is to generalize the Pauli stabilizer formalism in a way which still retains some of its nice properties and captures a larger class of states. Some of the nice properties include: efficient description of quantum states via their stabilizers, and understanding of properties of the state via manipulation of the stabilizers. We first define a monomial unitary matrix, which is a permutation matrix times a diagonal matrix of phases. An M-state is one which is stabilized by (i.e., is a +1 eigenstate of) a set of monomial unitaries. We typically want to restrict to special cases, such as $latex k$-local monomial stabilizers, etc. This class of states encompasses e.g. stabilizer states for qudits, all quantum double ground states, the AKLT model, W states, Dicke states, coherent probabilistic computations, coset states of Abelian groups, and so on. We define the monomial stabilizer group $latex \mathcal{G}$ to be the group generated by these unitaries, which we assume is finite. There is another group, the permutation group $latex \mathcal{P}$, which comes from “forgetting” the phase structure and keeping only the permutation part. We define the orbits to be the orbits of the computational basis states under the action of the operators in $latex \mathcal{P}$. Some simple theorems. Thm 1: all amplitudes of an M-state are zero outside a single orbit. Thm 2: all nonzero amplitudes have equal modulus. If you have an entire M-space, then you can find a basis for the space where each basis state is a uniform-amplitude superposition over a single orbit. The orbits are disjoint, so the dimension of the space is bounded by the total number of orbits. Computing some properties, such as sampling the state in the computational basis, is NP-hard in general. However, there are many examples that allow an efficient classical simulation.
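
A tiny worked example (mine): the Bell state is an M-state for the monomial unitaries $latex X\otimes X$ (a pure permutation) and $latex Z\otimes Z$ (pure phases). Consistent with Theorems 1 and 2, its amplitudes vanish outside the orbit {00, 11} of the permutation part and have equal modulus on it.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])     # permutation matrix
Z = np.diag([1., -1.])                 # diagonal (phase) matrix
I4 = np.eye(4)
XX, ZZ = np.kron(X, X), np.kron(Z, Z)  # commuting monomial unitaries

# Project onto the joint +1 eigenspace of XX and ZZ
P = (I4 + XX) @ (I4 + ZZ) / 4
vals, vecs = np.linalg.eigh(P)
mstate = vecs[:, np.argmax(vals)]      # the unique M-state here

print(np.round(mstate, 6))             # amplitudes on |00>, |01>, |10>, |11>
# Nonzero only on the orbit {00, 11} of the permutation part of the group,
# with equal modulus there: it is the Bell state (|00> + |11>)/sqrt(2).
```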

Hector Bombin, Structure of 2D topological stabilizer codes, arXiv:1107.2707, joint work with D. Poulin and G. Duclos-Cianci.

Topological order can be thought of in terms of equivalence classes of many-body systems under the action of constant-depth local unitary quantum circuits. We say that a state has short-range entanglement if such a circuit can map the state to a product state; other states are said to have long-range entanglement. (Wen, Gu, Chen 2009) We will consider topological stabilizer codes (TSC). These are a subset of the stabilizer code Hamiltonians, in which the local terms are stabilizer code generators. The goal of this talk is two-fold: show rigorously that local equivalence “works” as a way of classifying gapped phases, and find the general structure of 2D TSC, including subsystem codes. This allows us to extend techniques which are well known for specific models (such as RG error correction for the toric code) to other models. Let’s recall what defines a theory of anyons: topological charges (particle labels), fusion rules and braiding rules. In 2D, string operators play an important role in these models; in fact, we can recover the data specifying the anyon theory from the action of the string operators. In this talk, we are only considering qubit Hamiltonians, on an infinite lattice, that are translationally invariant. Then TSC are those where the centralizer of the stabilizer group (in the group of Paulis with finite support) is again the stabilizer group. TSC with the same anyon model are equivalent. Since we are going to show that everything is equivalent to some number of copies of the toric code, we can get a complete invariant by considering only the topological entanglement entropy. A sketch of the proof:

  1. Topological stabilizer groups admit local, translationally invariant independent generators (only possible in 2D)
  2. there are only a finite number of global constraints
  3. the number of global constraints equals the number of charges in the anyon theory
  4. the charges are topological, so they have string operators, which means we can extract an anyon model
  5. we can rule out chirality because the Hamiltonian is commuting (there can be no energy transport along the edge, as can be seen by considering evolution in the Heisenberg picture)
  6. build a framework for string segments as in multiple copies of the toric code
  7. other stabilizers (non-plaquette) have no charge
  8. map string segments to string segments and uncharged stabilizers to single qubit stabilizers
  9. ???

(Hector, if you’re reading this, fill in #9 in the comments. You went waaaay too fast!) Now let’s move on to subsystem codes. There are some complications since there is now also a gauge group as well as a stabilizer group. Gauge charges map naturally to stabilizer charges. Although they are not generally isomorphic, these two groups have the same order. In fact, these two charge groups are naturally dual. When all is said and done, every possible anyon model on qubits is a combination of: the toric code (2 bosons, 1 fermion, semionic statistics); TSCC (3 fermions, semionic interactions, chiral); subsystem toric code (1 boson); honeycomb subsystem code (1 fermion).

Quantum Information in Quantum Many-Body Physics — Live Blogging Day 1

For the rest of the week I’m at a very interesting conference in Montreal on quantum information and quantum many-body physics. I’ll be live blogging the event. Enjoy!

Quantum Information in Quantum Many-Body Physics

Alioscia Hamma, Entanglement in Physical States, arXiv:1109.4391. Joint work with P. Zanardi and S. Santra.

Some motivation: studying the foundations of statistical mechanics using quantum information; we want to investigate the role of entanglement in stat mech. The goal of this talk is to revisit some of the recent work by Winter et al. which studies stat mech using Haar-random states, and instead consider a subset of just the “physical” states, e.g. states which are reachable by local Hamiltonian evolution. Thermalization in a closed quantum system is a vexing philosophical problem. Let’s focus on entanglement: if we look locally at the system, we see a mixed state even if the global state is pure. Can we explain thermodynamics via this entanglement entropy? For a Haar-random pure state, the reduced density operator is almost maximally mixed. Now let’s define a global Hamiltonian with a weak coupling to a bath; then the reduced state is close to the Gibbs state. However, just by a simple counting argument (Poulin, Qarry, Somma, Verstraete 2011), most physical states (i.e. ones reachable by evolution under a local Hamiltonian for polynomial time) are far from typical Haar-random states. What Alioscia et al. want to do is start from completely separable pure states, evolve with respect to a local Hamiltonian for a polynomial amount of time, and then study the entanglement in such a system. We start with a probability measure [latex]p[/latex] on the set of subsets of qudits. Now sample a subset and evolve for an infinitesimal amount of time; then take a new iid sample and evolve again. Look at the reduced density operator and compute the purity, [latex]\mathrm{Tr}(\rho^2)[/latex]. Now let’s compute the mean and variance of the purity. To do this, use a superoperator formalism and a so-called random edge model. If you put the system on a lattice, you find that the average purity decays with an area-law scaling. Next, consider a linear chain of length [latex]L[/latex] cut into two pieces A and B. Instead of choosing edges from a random edge model, apply a random local unitary to each pair of neighboring sites, in a random order (it turns out this order doesn’t matter much). Now repeatedly apply these random local unitaries along the chain. The purity decays exponentially in the number of time steps. The average Rényi 2-entropy is nearly [latex]k \log d - \log 2[/latex], where [latex]k[/latex] is the number of time steps and [latex]d[/latex] is the local site dimension. In the [latex]k\to\infty[/latex] limit we get the maximally mixed state. They were not able to compute the gap of this superoperator yet, so we don’t know the rate of convergence. When the number of time steps is constant, we have an area law. They are working on 2D and 3D lattices now. An open question is how to take an energy constraint into account, so that one might be able to recover a Gibbs state. Question period: David Poulin suggests keeping or discarding the unitary according to a Metropolis-Hastings rule.
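
Here is a minimal simulation in the spirit of that construction (mine; I use Haar-random two-qubit unitaries on randomly chosen nearest-neighbor pairs rather than the exact ensemble from the talk): the half-chain purity decays from 1 toward the near-minimal Haar value as the random circuit deepens.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                  # chain of n qubits, A = first n//2

def rand_u4():
    q, r = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply2(state, gate, i):
    g = gate.reshape(2, 2, 2, 2)
    state = np.tensordot(g, state, axes=[[2, 3], [i, i + 1]])
    return np.moveaxis(state, [0, 1], [i, i + 1])

def purity(state, k):
    m = state.reshape(2**k, -1)
    rho = m @ m.conj().T               # reduced density matrix of the first k qubits
    return np.real(np.trace(rho @ rho))

psi = np.zeros((2,) * n); psi[(0,) * n] = 1.0     # completely separable initial state
for step in range(1, 41):
    psi = apply2(psi, rand_u4(), rng.integers(n - 1))  # random edge of the chain
    if step % 10 == 0:
        print(f"step {step:3d}: purity = {purity(psi, n // 2):.4f}")
# The purity falls from 1 toward the near-minimal Haar value ~ 2 * 2^(-n/2).
```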

Sergey Bravyi, Memory Time of 3D spin Hamiltonians with topological order, Phys. Rev. Lett. 107 150504 (2011); arXiv:1105.4159, joint work with Jeongwan Haah.

Want to store quantum information in a quantum many-body system for a long time. In a self-correcting memory, we want the natural physics of the system to do the error correction for us. An example of a classical self-correcting memory: the 2D ferromagnetic Ising model. There is a macroscopic energy barrier which prevents transitions between the ground states. It’s no good as a quantum memory, though, because different ground states can be distinguished locally. (We need topological order.) Begin with an encoding of the information, i.e. an initialization of the system. To model the evolution, we use Metropolis dynamics: if a spin flip would decrease the energy, then flip it; otherwise, flip it with probability [latex]\exp(-\beta \Delta E)[/latex]. If we are below the critical temperature, then the lifetime of the information is long; otherwise it is short. Now go to the quantum case. We will consider 2D and 3D quantum codes, restricting to Hamiltonians whose local terms are supported on a unit cell (locality). All the stabilizers must commute and must be tensor products of Pauli operators. The Hamiltonian is frustration-free. The initial state of the quantum memory is some ground state. Now we consider the dynamics. Restrict to Markovian dynamics, i.e. a master equation in Lindblad form. We will restrict the jump operators to be local with bounded operator norm, and the total number of jump operators should be only of order [latex]n[/latex] (the number of spins). We require that the fixed point of the master equation is the Gibbs state, and also detailed balance. Define the Liouville inner product [latex]\langle X, Y \rangle_\beta = \mathrm{Tr}( e^{-\beta H} X^\dagger Y )[/latex]; detailed balance amounts to [latex]\langle X, \mathcal{L}(Y) \rangle_\beta = \langle \mathcal{L}(X), Y \rangle_\beta[/latex]. As for decoding, we measure the syndrome and correct the errors. We measure the eigenvalue of each of the stabilizer operators; whenever we record a -1 eigenvalue, we call that a defect. The decoder’s goal is to annihilate the defects in the way which is most likely to return the memory to its original state. Ignoring the computational complexity of the decoding algorithm for the moment, a good decoder is a minimum energy decoder, à la Dennis, Kitaev, Landahl, Preskill. (Why not consider a free-energy minimizer instead?) Rather than use this decoder, we use the RG decoder of Harrington (PhD thesis) and Duclos-Cianci and Poulin (2009). To do this, identify connected clusters of defects and find the minimum enclosing box for each cluster; then try to erase each cluster by finding a minimum energy error within its box. Some of the clusters will be corrected, some will not. Now coarse-grain by a factor of two and try again, stopping when there are no defects left; then apply the product of all the recorded erasing operators. If this returns the system to the ground state, declare success. Declare failure if the box size increases to the linear size of the lattice and there are still some defects left. Now let’s define the energy barrier, so that a high energy barrier gives high storage fidelity: a stabilizer Hamiltonian has energy barrier [latex]B[/latex] iff any error path of local operators mapping a ground state to an orthogonal ground state has energy cost at least [latex]B[/latex].
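
Here is a toy Metropolis simulation of the classical warm-up (mine; the lattice size, temperatures and sweep count are arbitrary): below [latex]T_c \approx 2.27[/latex] the sign of the magnetization, i.e. the stored bit, persists; above it, the bit is scrambled.

```python
import numpy as np

rng = np.random.default_rng(2)

def sweep(s, beta):
    """One Metropolis sweep of a 2D Ising lattice with periodic boundaries."""
    n = s.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        # energy change from flipping spin (i, j)
        nn = s[(i+1) % n, j] + s[(i-1) % n, j] + s[i, (j+1) % n] + s[i, (j-1) % n]
        dE = 2.0 * s[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

n, sweeps = 16, 400
for T in (1.5, 3.5):                  # T_c ~ 2.27 for the 2D Ising model
    s = np.ones((n, n))               # encode the bit as "all spins up"
    flips = sum(1 for _ in range(sweeps) if (sweep(s, 1.0 / T), s.mean() < 0)[1])
    print(f"T = {T}: fraction of sweeps with flipped magnetization = {flips/sweeps:.2f}")
# Below T_c the stored bit survives; above T_c it is quickly scrambled.
```
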
Lemma 1: the trace-norm distance between the output of the minimum energy decoder and the original ground state is upper bounded by [latex]\lVert \Phi_{\mathrm{ec}}(\rho(t)) - \rho(0)\rVert \le O(tN) \sum_{m\ge B} \binom{N}{m} e^{-\beta m}[/latex]. How large does the energy barrier need to be to get self-correcting properties? Clearly a constant [latex]B[/latex] is insufficient. What about [latex]B \sim \log N[/latex]? Plug this into the previous bound and try to pick the system size (for a fixed temperature) that minimizes the RHS. Choose the system size to scale like [latex]N_\beta \sim e^{\beta/2}[/latex] and you get that the storage infidelity decreases like [latex]e^{-c\beta^2}[/latex]. This is a “marginal” self-correcting memory: there is a system size which is “too big”, beyond which you lose the self-correction properties, but at the right size you get good properties. Contrast this with systems that have string-like logical operators: applying a string-like operator to the vacuum creates a pair of defects at the end points of a string, so the energy barrier is [latex]O(1)[/latex] and the system can’t be self-correcting; the memory time is [latex]O(e^{\beta})[/latex]. All 2D stabilizer Hamiltonians have these string-like logical operators, and in 3D, all previous attempts had string-like excitations as well. Recent results by Haah (arXiv:1101.1962) provably have no string-like logical operators in 3D. Suppose that we consider a stabilizer Hamiltonian that has topological quantum order (TQO) but does not have string-like logical operators. What can we say about its energy barrier? What can we say about its memory time? We need to define “TQO” and “string-like”. For simplicity, consider only translationally invariant Hamiltonians. Definition: a finite cluster of defects C is called neutral iff C can be created from the vacuum by acting on a finite number of qubits; otherwise C is called charged. For example, a single semi-infinite string would create a charged cluster. Definition: TQO iff 1) orthogonal ground states are indistinguishable on scales [latex]\ll L[/latex]; 2) any neutral cluster of defects C can be created from the vacuum by acting on qubits inside the smallest enclosing box [latex]b(C)[/latex] and its constant neighborhood. Now define string-like. Consider a Pauli operator P that creates defects only inside two boxes A1 and A2 (the anchors). Such a logical string segment P is called trivial iff both anchors are topologically neutral. Trivial string segments can be localized by multiplying them with stabilizers; they aren’t actually strings. The aspect ratio [latex]\alpha[/latex] is the distance between the anchors divided by the size of the anchor regions. The no-strings rule: there exists a constant [latex]\alpha[/latex] such that any logical string segment with aspect ratio greater than [latex]\alpha[/latex] is trivial. Main result, Theorem 1: any sequence of local Pauli errors mapping a ground state to an orthogonal ground state must cross a logarithmic energy barrier. The constant in front of the logarithmic scaling depends only on the aspect ratio (the no-strings constant) and the local site dimension. (The original Haah code has [latex]\alpha = 15[/latex].) This bound is tight and gives the [latex]e^{c\beta^2}[/latex] memory time scaling for the minimum energy decoder. The next result gives a similar statement for the more practical RG decoder. Sketch of the proof of Theorem 1: the no-strings rule implies that local errors can move charged clusters of defects only a little bit.
Dynamics of neutral clusters is irrelevant as long as such clusters remain isolated. The no-strings rule is scale invariant, so use an RG approach to analyze the energy barrier. Define a notion of “sparse” and “dense” syndromes. Lemma: dense syndromes are expensive in terms of energy cost. Any particular error path will have some sparse and some dense moments as you follow the path. Now do some RG flow: keep only the dense points in the path; this sparsifies some of the dense ones at the next level of the RG hierarchy; repeat. Use the no-strings rule to localize errors. In conclusion: all stabilizer Hamiltonians in 2D local geometries have string-like logical operators, and their energy barrier does not grow with the lattice size. Some 3D Hamiltonians have no string-like logical operators; their energy barrier grows logarithmically with the size of the lattice. For small temperature [latex]T[/latex], the memory time grows exponentially with [latex]1/T^2[/latex], and the optimal lattice size grows exponentially with [latex]1/T[/latex].
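
To see where the [latex]e^{\beta/2}[/latex] optimal size and the [latex]e^{c\beta^2}[/latex] memory time come from, you can numerically minimize the right-hand side of the Lemma 1 bound over the system size (my own sketch; the constant [latex]c=1[/latex] and the scan ranges are arbitrary, so the optimum only tracks [latex]e^{\beta/2}[/latex] up to O(1) factors).

```python
import numpy as np
from math import lgamma, log, exp

def log_bound(N, beta, c=1.0):
    """log of sum_{m >= c log N} C(N, m) exp(-beta m), computed in log-space."""
    B = max(1, int(np.ceil(c * log(N))))
    terms = [lgamma(N + 1) - lgamma(m + 1) - lgamma(N - m + 1) - beta * m
             for m in range(B, N + 1)]
    mx = max(terms)
    return mx + log(sum(exp(u - mx) for u in terms))

for beta in (4.0, 6.0, 8.0):
    Ns = np.arange(2, 400)
    vals = [log(N) + log_bound(int(N), beta) for N in Ns]  # the O(tN) prefactor
    Nopt = int(Ns[int(np.argmin(vals))])
    print(f"beta = {beta}: best N ~ {Nopt} (compare e^(beta/2) ~ "
          f"{exp(beta / 2):.0f}), min log-infidelity ~ {min(vals):.1f}")
# The optimal N tracks e^(beta/2), and the minimized log-infidelity scales
# like -beta^2 up to constants, matching the exp(c beta^2) memory time.
```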

Tzu-Chieh Wei, Measurement-based Quantum Computation with Thermal States and Always-on Interactions, Phys. Rev. Lett. 107 060501 (2011); arXiv:1102.5153. Joint work with Li, Browne, Kwek & Raussendorf.

Can we do MBQC with a gapped Hamiltonian in the regime [latex]\beta\Delta \ll 1[/latex], where [latex]\Delta[/latex] is the energy gap? Consider the toy model of the trivalent AKLT state on a honeycomb lattice: start with a bunch of spin-1/2 singlets on the edges and project onto the maximal angular momentum (spin-3/2) subspace at each vertex. If we consider the toy model of a single unit cell, then we see some features: the model is gapped (of course) and the Hamiltonian evolution is periodic (of course). So, if we perform operations at regular intervals, perhaps we can get MBQC to work even when [latex]\beta \Delta \ll 1[/latex]. We need to reduce the dimensionality of the central spin from 4 states to 2 states. We can use projection filtering to get this down to a GHZ state along a randomly chosen quantization axis (one of [latex]x,y,z[/latex]). Distill a cluster state by measuring the POVM on the center particles, and then measure the bond particles. We can extend this to 3D as well; it deterministically distills a 3D cluster state. As for fault tolerance, you can use the Raussendorf-Harrington-Goyal scheme.
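
The filtering step uses the spin-3/2 POVM whose elements are (up to normalization) projectors onto [latex]S_\nu = \pm 3/2[/latex] for [latex]\nu \in \{x,y,z\}[/latex]. Here is a quick completeness check (mine, using textbook spin matrices; the [latex]\sqrt{2/3}[/latex] normalization is the standard one in the AKLT-MBQC literature).

```python
import numpy as np

def spin_matrices(s):
    """Spin-s operators (Sx, Sy, Sz) in the |s, m> basis, m = s, s-1, ..., -s."""
    m = np.arange(s, -s - 1, -1)
    Sz = np.diag(m)
    c = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))   # <m+1| S+ |m>
    Sp = np.zeros((len(m), len(m)))
    idx = np.arange(len(m) - 1)
    Sp[idx, idx + 1] = c
    return (Sp + Sp.T) / 2, (Sp - Sp.T) / 2j, Sz

Sx, Sy, Sz = spin_matrices(1.5)
# Each (S_nu^2 - 1/4)/2 is the projector onto the S_nu = +/- 3/2 subspace.
Fs = [np.sqrt(2 / 3) * (S @ S - 0.25 * np.eye(4)) / 2 for S in (Sx, Sy, Sz)]

completeness = sum(F.conj().T @ F for F in Fs)
print("sum F^dag F = I:", np.allclose(completeness, np.eye(4)))
```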

Anne E. B. Nielsen, Infinite-dimensional matrix product states, arXiv:1109.5470, joint work with Cirac and Sierra.

Recall the definition of an iMPS: we take the [latex]D\to\infty[/latex] limit of a standard matrix product state. Let’s consider iMPS constructed from conformal fields. Motivation: we want to identify families of states useful for investigating particular problems in quantum many-body systems. Why the [latex]D\to\infty[/latex] limit, and why conformal fields? We want to describe critical systems: the entanglement entropy is not limited by an area law, there is power-law decay of spin-spin correlation functions, and long-range interactions are possible. Mathematical tools from CFT are useful, and sometimes it is possible to derive analytical results. Let’s restrict to a special model, the [latex]\mathrm{SU}(2)_k[/latex] WZW model. There are chiral primary fields [latex]\phi_{j,m}(z)[/latex], with spin values [latex]j \in \{0, 1/2, \ldots, k/2\}[/latex], where [latex]m[/latex] is the z-component. There is a closed form of the wavefunction for [latex]k=1, j=1/2[/latex] when the [latex]z_i[/latex] are uniform on the unit circle in the complex plane. One can compute the Rényi 2-entropy and the two-point spin correlation function. We can also derive a parent Hamiltonian for this model, using two properties: the null field and the Ward identity. This Hamiltonian is non-local but 2-body. We recover the Haldane-Shastry model in the uniform case. They computed the two- and four-point correlation functions.

David Sénéchal, The variational cluster method.

Ultimate goal: solve the Hubbard model, i.e. the simplest model of interacting electrons that you can write down. In particular, we are interested in the Green’s functions, single-particle properties and the susceptibilities (two-particle properties). We base the cluster method on a tiling of the lattice (i.e. a superlattice). We solve for the local information within each block first, and then try to restore the connections between the blocks. We can rewrite this using a Fourier transform and parameterize in terms of two wave vectors: a discrete cluster wave vector (conjugate to the site within a cluster) and a continuous reduced wave vector. Now we decompose the Hamiltonian into a cluster term plus a perturbation containing the inter-cluster hopping terms, and treat this at lowest order in perturbation theory. The approximate Green’s function at this level of approximation is very simple: [latex]G^{-1} = G_c^{-1} - V[/latex], i.e. take the inverse cluster Green’s function and subtract the perturbation to get the inverse of the total Green’s function. The lattice self-energy is approximated by the cluster self-energy. Note that this method breaks translation invariance, but this can be “restored” by completing the Fourier transform (i.e. Fourier transforming over the cluster sites) to “periodize” the Green’s function. (This seems to be the right way to do it, rather than periodizing the self-energy, at least at half filling where we have an exact solution in 1D.) Cluster perturbation theory (CPT) is exact when [latex]U=0[/latex] and when [latex]t=0[/latex], and also gives exact short-range correlations. It allows a continuum of wavevectors. However, it doesn’t allow long-range order or broken symmetry. The approximation is controlled by the size of the cluster, and finite-size effects are important. There is no self-consistency condition (unlike dynamical mean-field theory). One idea to capture a broken symmetry in CPT is to add a Weiss field; to set the relative weight of this term you would need some kind of principle, such as energy minimization. Instead, we use an idea due to Potthoff (M. Potthoff, Eur. Phys. J. B 32, 429 (2003)). We define a functional of the Green’s function with the property that varying with respect to G gives the self-energy, and add some terms so that the stationary condition yields the Dyson equation. The value at the stationary point is the grand potential. There are now several approximation schemes: we could approximate the Euler equation (the equation for the stationary solution); we could approximate the functional (Hartree-Fock, etc.); or we could restrict the variational space but keep the functional exact. We focus on the third method (following Potthoff). Potthoff suggested using the self-energy instead of the Green’s function, via a Legendre transform. This functional is universal in the sense that its functional form depends only on the interaction part. We can introduce a reference system which differs from the original Hamiltonian by one-body terms only. Suppose that we can solve this reference system exactly; then at the physical self-energy we can compute the value of the grand potential. Thus, we can find the dependence of the grand potential on the Green’s function. Showed some examples with Néel antiferromagnetism, superconductivity, and a dynamical example studying the Mott transition.
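
Here is a minimal check of the CPT recipe at [latex]U=0[/latex], where it should be exact (my sketch: 1D chain, two-site clusters; the phase convention in the inter-cluster hopping [latex]V(\tilde{k})[/latex] is my choice): the poles of [latex]G(\tilde{k},\omega) = [\omega - T_c - V(\tilde{k})]^{-1}[/latex], i.e. the eigenvalues of [latex]T_c + V(\tilde{k})[/latex], reproduce the folded tight-binding band [latex]-2t\cos k[/latex].

```python
import numpy as np

t = 1.0
Tc = -t * np.array([[0., 1.], [1., 0.]])   # open two-site cluster

def V(kt):
    """Inter-cluster hopping at reduced wave vector kt (superlattice constant 2)."""
    return -t * np.array([[0.,                np.exp(-2j * kt)],
                          [np.exp(2j * kt),   0.              ]])

for kt in np.linspace(-np.pi / 2, np.pi / 2, 5):
    cpt = np.sort(np.linalg.eigvalsh(Tc + V(kt)))          # CPT poles
    exact = np.sort([-2 * t * np.cos(kt), -2 * t * np.cos(kt + np.pi)])
    print(f"k~ = {kt:+.3f}: CPT {np.round(cpt, 6)}, exact folded band {np.round(exact, 6)}")
```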

Guillaume Duclos-Cianci, Local equivalence of topological order: Kitaev’s code and color codes, arXiv:1103.4606, joint work with H. Bombin and D. Poulin.

Focus on the toric code to be concrete. The main result is the following theorem: all 2D topological stabilizer codes are equivalent, meaning there exists a local unitary mapping to a certain number of copies of Kitaev’s toric code. First, a review of the toric code, the fact that its excitations are bosons with mutual semionic statistics, and the notion of topological charge (i.e. equivalence classes of excitation configurations). Now consider the decoding problem: given a configuration of defects, how can we decide which equivalence class we started in? Now define color codes. Gave a lot of details of the mapping taking the color code to two copies of the toric code, but I would really need some pictures to give an adequate description. Showed some results on the threshold for decoding using the mapping.

Philippe Corboz, Recent progress in the simulation of strongly correlated systems in two dimensions with tensor network algorithms, several papers, joint work with Bauer, Troyer, Mila, Vidal, White, Läuchli & Penc.

Want to use tensor networks to study strongly correlated systems. Typically, one uses quantum Monte Carlo (QMC), but this fails for fermionic or frustrated systems because of the sign problem, so it is important to look for alternatives. To model fermions with a tensor network, we need to take the exchange statistics into account, and this can be done. Consider the t-J model: does the tensor network approach reproduce the “striped” states observed in some cuprates? DMRG (on wide ladders) says YES, while variational and fixed-node Monte Carlo say NO. What does iPEPS say? It says stripes! Another example: the [latex]\mathrm{SU}(N)[/latex] Heisenberg models on a 2D square lattice. For $latex N=2$ we know there is Néel order. The iPEPS result reproduces known results from QMC in this case; a good start. There are problems fitting as you increase the bond dimension $latex D$, and it is difficult to extrapolate. For $latex N=3$ and $latex N=4$ the sign problem means you can’t use QMC. Focus on the case $latex N=4$. Here they find a new ground state which has considerably lower energy than previous variational results. The new ground state exhibits dimerization, and the color variation across each dimer exhibits Néel order (“dimer-Néel order”); here the colors just refer to the 4 different degrees of freedom. So this, rather than the plaquette state predicted by linear flavor-wave theory, seems to be the ground state.