Sergey opened his talk with an image of a bagel in a Solfatara landscape, meant to evoke topologically protected qubits (get it? a bagel?) in a thermal (steam) bath.

The question is whether we can engineer an energy landscape with high barriers between encoded states so as to enable passive quantum error correction; i.e. to create a topologically protected qubit. Here are the main ingredients:

- Stabilizer codes on a lattice with local generators and code distance $latex d \approx L$, where $latex L$ is the lattice size.
- A code Hamiltonian with energy penalty for states outside logical subspace. The Hamiltonian is defined simply by having a violated stabilizer cost unit energy.
- Thermal noise: should be described by a Markovian master equation with local jump operators. The spectral density should obey detailed balance; this will be the place where temperature enters.
- Finally, there should be a decoder. Here we consider the RG (renormalization group) decoder.

Kitaev’s toric code (1997) was the first example of a topological QECC. If you’re not familiar with it, then you should read about that instead of the current results. The key idea is that logical errors are strings that wrap around the torus, and so have length at least $latex L$. However, since a partial string incurs only a constant energy penalty, the *memory time* (= amount of time until the state degrades by a constant amount) is $latex e^{2\beta}$ [arXiv:0810.4584], independent of the lattice size! This is pretty disappointing; we can get this performance by *not encoding the qubit at all*.

Indeed, *any* 2D topological stabilizer code has constant energy barrier [0810.1983], which implies that relaxation to thermal equilibrium occurs at a rate independent of system size $latex L$.

This no-go result is daunting, but in higher dimensions we can still achieve some nontrivial topological protection. Here’s what we want: a topological stabilizer code in a D-dimensional array with commuting Pauli stabilizers $latex G_a$, each with support of size O(1), and with each qubit acted on by O(1) stabilizers. The ground space of the corresponding Hamiltonian is the set of states that are +1 eigenvectors of every stabilizer. We would like the distance to grow at least linearly with the lattice size; such models have intrinsic robustness.

Fortunately, there is a shining example of a spatially local stabilizer code with all the self-correcting properties we want. Unfortunately, it requires D=4; it is the 4-d toric code. Here is its energy landscape.

Defects are loops around error membranes, and any sequence of single-qubit Pauli errors mapping a ground state to an orthogonal one must create roughly $latex L$ defects at some step. Thus the memory time is exponential in $latex L$ for low $latex T$ [0811.0033].

All 3D stabilizer Hamiltonians studied so far have *constant* energy barrier. Indeed this is true of all 3D codes that are invariant under translation and “scale invariance” [1103.1885]. (Here, scale invariance means that the ground state degeneracy doesn’t depend on the size of the lattice.) Haah’s breakthrough 1101.1962 is the first example of a 3-d stabilizer code without string-like logical operators. In his model, defects cannot move very far.

Our results establish self-correcting properties for any topological stabilizer code that obeys the no-strings rule, e.g. Haah’s codes.

- Their energy barrier is $latex \Omega(\log L)$
- They have partial self-correction. Specifically, the Arrhenius law together with the logarithmic energy barrier gives lifetime $latex L^{c\beta}$, for any inverse temperature $latex \beta>0$, and for $latex L \leq L^*$ for some critical value $latex L^* \sim e^{\beta/3}$. See figure:

This lifetime reaches a maximum value (as a function of $latex L$) when $latex L$ is proportional to $latex e^{\beta/3}$. Thus, choosing $latex L$ optimally results in a memory lifetime of $latex T_{\mathrm{mem}} \sim e^{c\beta^2/3}$.
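A quick numerical check of this optimization, with an illustrative constant $latex c$ of my own choosing (the talk only fixes the scaling):

```python
import math

# Hypothetical constants for illustration; the talk only fixes the scaling.
c = 1.0
beta = 4.0      # inverse temperature

L_star = math.exp(beta / 3)     # critical lattice size L* ~ e^{beta/3}

def lifetime(L):
    """Arrhenius-law lifetime L^{c*beta}, valid only up to L*."""
    return L ** (c * beta) if L <= L_star else 0.0

# The lifetime increases with L up to the cutoff, so the optimum sits
# at L = L*, giving T_mem ~ e^{c beta^2 / 3}.
T_opt = lifetime(L_star)
predicted = math.exp(c * beta ** 2 / 3)
assert abs(T_opt - predicted) < 1e-6 * predicted
```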

One example of a code with no string-like errors is the 3-d cubic code of Haah. It is obtained by placing two qubits at each site of a cubic lattice and tiling the following stabilizer generators. See figure (from the Bravyi-Haah paper).

**The noise model:** One can get a Markovian noise model by considering the Davies weak coupling limit, where the Lindblad operator $latex A_{k,\omega}$ transfers energy $latex \omega$ from the system to the bath. The spectral density $latex r(\omega)$ obeys detailed balance: $latex r(-\omega) = e^{-\beta\omega}\, r(\omega)$. Note that this is the only place where temperature enters.
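A minimal sketch of such rates, assuming a Glauber-form spectral density (my illustrative choice, not the talk's), with $latex \omega$ the energy transferred from the system to the bath:

```python
import math

beta = 2.0  # inverse temperature of the bath

def rate(omega, beta=beta):
    """Toy Glauber-form jump rate, where omega is the energy transferred
    from the system to the bath (illustrative choice, not the talk's)."""
    return 1.0 / (1.0 + math.exp(-beta * omega))

# Detailed balance: r(-omega) = exp(-beta*omega) * r(omega)
omega = 1.3
db_gap = abs(rate(-omega) - math.exp(-beta * omega) * rate(omega))
assert db_gap < 1e-12
```

Energy-lowering jumps (large positive $latex \omega$) happen at rate near 1, while energy-raising jumps are exponentially suppressed, which is exactly how temperature enters.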

**The decoder:** After measuring the syndrome, the correction operator is found using ideas from Jim Harrington’s thesis [2004; I can’t find it online] and the Renormalization Group decoder from 1006.1362. See photo.

The strategy is to:

- Find connected clusters
- For each connected cluster C find the min enclosing box and try to annihilate C by an operator acting within the box.
- Stop if no defects left.
- If some defects are left, then increase the length by factor of 2 and repeat.

The RG decoder stops when all defects have been annihilated or when the length scale reaches the lattice size, in which case it fails. It can also fail when all defects have been annihilated but the system has been returned to the wrong ground state.
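The loop above can be sketched in code. The cluster-annihilation test here is a toy stand-in (even-sized clusters annihilate, as for pairs of toric-code anyons), not the actual stabilizer computation:

```python
# A toy skeleton of the RG decoder on a 2-D syndrome pattern.
# Toy rule (my assumption, not the paper's): a cluster can be
# annihilated locally iff it contains an even number of defects.

def clusters(defects, scale):
    """Group defects whose L_inf distance is <= scale, by BFS."""
    defects = list(defects)
    unseen = set(range(len(defects)))
    out = []
    while unseen:
        stack = [unseen.pop()]
        comp = []
        while stack:
            i = stack.pop()
            comp.append(defects[i])
            near = [j for j in unseen
                    if max(abs(defects[i][0] - defects[j][0]),
                           abs(defects[i][1] - defects[j][1])) <= scale]
            for j in near:
                unseen.remove(j)
            stack.extend(near)
        out.append(comp)
    return out

def rg_decode(defects, L):
    defects = set(defects)
    scale = 1
    while defects and scale <= L:
        for comp in clusters(defects, scale):
            if len(comp) % 2 == 0:      # annihilate within the bounding box
                defects -= set(comp)
        scale *= 2                      # coarse-grain and repeat
    return not defects                  # True = all defects removed

# Two nearby pairs are corrected; a single unpaired defect cannot be.
assert rg_decode([(0, 0), (0, 1), (5, 5), (5, 6)], L=16)
assert not rg_decode([(0, 0)], L=16)
```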

The RG decoder can be implemented in time poly(L) (and can be parallelized to run in time log(L)).

We can define the memory time as follows. We time evolve according to the master equation with the local quantum jump (Lindblad) operators. After some time $latex t$, we run the decoder. Then we check the worst case (over all initial states) of the trace distance between the initial and final states.

$latex \| \mathcal{D}(\rho(t)) - \rho(0) \|_1 \le O(t) N 2^k \exp(-\beta m) (1+\exp(-\beta))^N.$

Here N is the number of physical qubits, k is the number of logical qubits, t is the time, $latex \beta$ is the inverse temperature, and m is the number of errors that can be corrected by the decoder (or, more precisely, the height of the energy barrier). Essentially, there is a temperature/energy factor that competes with an entropic factor.
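A rough numerical illustration of this competition, with my own choices $latex N = L^3$ and $latex m = c \ln L$ for the cubic-code setting, and dropping constants:

```python
import math

def rhs(t, L, beta, c=1.0, k=1):
    """Right-hand side of the bound (dropping the O(t) constant),
    with N = L^3 qubits and barrier m = c*ln(L) -- illustrative choices."""
    N = L ** 3
    m = c * math.log(L)
    return t * N * 2**k * math.exp(-beta * m) * (1 + math.exp(-beta)) ** N

# The energy factor exp(-beta*m) shrinks with L, while the entropic
# factor (1+e^{-beta})^N ~ exp(N e^{-beta}) explodes; the bound is
# minimized near L ~ e^{beta/3}, matching the critical L* in the talk.
b4, b8, b16 = (rhs(1.0, L, beta=6.0) for L in (4, 8, 16))
assert b8 < b4 < b16
```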

Analyzing it, we find that the memory time grows at least as $latex L^{c\beta}$ as long as $latex L \le \exp(\beta/3)$.

This gives a large but finite lower bound on the memory time at fixed temperature.

**Is it optimal?** Maybe the analysis is pessimistic and the Haah code actually has an exponentially large lifetime? Maybe growing L can result in unbounded memory time when $latex beta$ is sufficiently large? The authors carried out a Monte Carlo simulation of Haah code with only X errors to test these theories. It used 1000 days of CPU time on IBM’s Blue Gene.

The plots are qualitatively similar to the lower bounds on memory time. In particular, they show that the maximum memory time scales quadratically with $latex beta$ and that the memory time for fixed temperature increases with $latex L$ and then starts to decrease.

**How do the proofs go?**

Idea 1: The no-strings rule implies localization of errors. That is, any error $latex E$ can be written as $latex E = E_{\mathrm{loc}} \cdot G$, where $latex G$ is a stabilizer and $latex E_{\mathrm{loc}}$ is supported on at most 2 neighborhoods.

In order for the accumulated error to have large weight, at least one intermediate syndrome must be non-sparse, with some pair of defects within distance $latex a$ of each other.

Idea 2: Uses scale invariance of the no-strings rule.

Define sparseness and denseness at different scales. A syndrome which is dense at p consecutive scales will include a cluster of p defects.

Show that to implement a logical operation, at least one intermediate syndrome must be dense at roughly $latex \log(L)$ different scales.

Consider the problem of an *interference channel*, with two senders and two receivers. This problem is hard enough in the “ccqq” case, meaning that the channel takes two classical inputs to a bipartite quantum output: i.e. $latex x_1,x_2 rightarrow rho_{x_1,x_2}^{B_1B_2}$. This is depicted here (figure taken from their paper).

Along the way to understanding this, we’ll need to consider a multiple-access channel, with two senders and one receiver. The achievable rate region is in a sense an optimistic one, bounded by the obvious constraints:

$latex R_1 leq I(X_1;B|X_2)$

$latex R_2 \leq I(X_2;B|X_1)$

$latex R_1 + R_2 leq I(X_1X_2;B)$

Why are these obvious? The first two constraints are what we would obtain if the receiver has successfully decoded one message, and both parties get to use that information to design a joint encoder-decoder pair. The last constraint is what we would get if the two senders could work together, apart from the requirement that their average input to the channel should be product across the two senders. Thus, these are natural upper bounds. The nonobvious part is that this can be *achieved*.
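For intuition, here is the rate region computed for a standard *classical* toy example, the binary adder MAC (my example, not from the talk):

```python
import math
from itertools import product

# Toy classical multiple-access channel: B = X1 + X2 (the binary adder),
# with uniform independent inputs.
joint = {}
for x1, x2 in product((0, 1), repeat=2):
    joint[(x1, x2, x1 + x2)] = 0.25

def H(dist):
    """Shannon entropy of a distribution given as a dict of probabilities."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

def marg(keys):
    """Marginalize the joint onto the coordinates listed in `keys`."""
    out = {}
    for outcome, q in joint.items():
        key = tuple(outcome[i] for i in keys)
        out[key] = out.get(key, 0.0) + q
    return out

def I_cond(S, T, C):
    """Conditional mutual information I(S;T|C), via entropies."""
    return H(marg(S + C)) + H(marg(T + C)) - H(marg(C)) - H(marg(S + T + C))

R1 = I_cond([0], [2], [1])        # I(X1;B|X2)
R2 = I_cond([1], [2], [0])        # I(X2;B|X1)
Rsum = I_cond([0, 1], [2], [])    # I(X1X2;B)

assert abs(R1 - 1.0) < 1e-9 and abs(R2 - 1.0) < 1e-9
assert abs(Rsum - 1.5) < 1e-9     # corner points (1, 0.5) and (0.5, 1)
```

The sum constraint (1.5 bits) is strictly below $latex R_1 + R_2 = 2$, so the corner points really do require one receiver to decode conditioned on the other message.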

The coding theorem was obtained by Andreas Winter, when he was a grad student (quant-ph/9807019). How does it work? First, we observe that the desired rate region can be obtained as a convex combination of the “corner points” of the rate region: $latex (I(X_1;B), I(X_2;B|X_1))$ and $latex (I(X_1;B|X_2), I(X_2;B))$.

To achieve one of these corner points, use the fact that the two senders are sending uncorrelated inputs, so we can average over the inputs of one sender (say $latex X_2$) to obtain an effective single-sender/single-receiver channel, for which coding is possible at rate $latex I(X_1;B)$. Of course, measurement is potentially destructive, but since we are operating in the regime of very low error, we can use the fact that measurements with nearly certain outcomes cause very little damage (the infamous “Tender Measurement Lemma”). (Ok, it’s mostly called the “gentle measurement lemma” but I like “tender.”) Thus, the decoder obtains $latex X_1$ and, conditioned on it, can decode $latex X_2$. (Clearly the sender knows $latex X_1$ as well. Note that this is no longer true for the multiple-sender case.)

Sequential Decoding: To do decoding, we need to use the typical projector for the average state, as well as the conditionally typical projectors for codewords. The HSW idea is to use the pretty good measurement. The idea of sequential decoding, in analogy with the classical idea, is to check each codeword sequentially. It works out pretty similarly, with the measurement operators in both cases being lightly distorted versions of the conditionally typical projectors.

The crucial lemma making this possible is from 1109.0802, by Pranab Sen. He calls it a “noncommutative union bound”. The statement is that

$latex 1 - \mathrm{tr}(\Pi_n \cdots \Pi_1 \rho\, \Pi_1 \cdots \Pi_n) \le 2 \sqrt{\sum_{i=1}^n \mathrm{tr}((1-\Pi_i) \rho)}$
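A quick numeric sanity check of Sen's bound on a pair of noncommuting qubit projectors (a toy instance I chose):

```python
import math

# 2x2 real matrix helpers -- enough for a one-qubit sanity check.
def mat(*rows): return [list(r) for r in rows]
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
def tr(A): return A[0][0] + A[1][1]
def sub(A, B): return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

I2 = mat((1, 0), (0, 1))
P1 = mat((1, 0), (0, 0))            # projector onto |0>
P2 = mat((0.5, 0.5), (0.5, 0.5))    # projector onto |+>
rho = mat((1, 0), (0, 0))           # state |0><0|

# LHS: 1 - tr(P2 P1 rho P1 P2); RHS: 2*sqrt(sum of individual failure probs).
lhs = 1 - tr(mul(mul(P2, mul(P1, mul(rho, P1))), P2))
rhs = 2 * math.sqrt(tr(mul(sub(I2, P1), rho)) + tr(mul(sub(I2, P2), rho)))
assert lhs <= rhs
```

Here the left side is 1/2 (the $latex |+\rangle$ measurement disturbs $latex |0\rangle$) while the right side is $latex 2\sqrt{1/2} \approx 1.41$, so the bound holds with room to spare.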

Successive decoding: In general, we’d like to analyze these problems of multipartite information theory using the traditional single-sender/single-receiver tools, like HSW. Since each individual code works with exponentially small error, and the gentle-measurement lemma states that decoding causes exponentially small damage, we should be able to compose several protocols without much trouble. The only subtlety is that the typical projectors don’t commute, which is where the noncommutative union bound comes in. We apply it to the following “intersection projectors.”

$latex \tilde{\Pi}_{x_1^n, x_2^n} \equiv \Pi\, \Pi_{x_1^n}\, \Pi_{x_1^n, x_2^n}\, \Pi_{x_1^n}\, \Pi$

Most important open problem: find a general three-sender quantum simultaneous decoder. Solving this should hopefully yield the insights required to handle an unlimited number of senders.

Compression and transmission: the source coding and noisy coding theorems of Shannon. The fundamental limit on compression is the entropy.

Lossless compression is often too stringent a condition. For example, consider jpg compression, which gives faithful images (to a human eye) despite throwing away lots of bits of information. We consider Rate Distortion Theory, i.e. the theory of lossy data compression. We are interested in the maximum rate of compression given a fixed maximum amount of distortion. Define the rate distortion function:

$latex R_c(D) =$ minimum classical asymptotic cost for sending many copies of state $latex \rho$ with per-copy distortion $latex \leq D$, where $latex c$ is for classical.
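For intuition, the classical benchmark: Shannon's rate-distortion function for a binary source under Hamming distortion (a textbook example, not from the talk):

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def R(D, p=0.5):
    """Shannon rate-distortion function of a Bernoulli(p) source under
    Hamming distortion: R(D) = h(p) - h(D) for D < min(p, 1-p), else 0."""
    return max(0.0, h(p) - h(D))

assert R(0.0) == 1.0      # lossless: one bit per symbol
assert R(0.5) == 0.0      # a fair coin flip needs no communication at all
```

Tolerating a little distortion buys a strict rate saving, e.g. $latex R(0.25) \approx 0.19$ bits per symbol instead of 1; the quantum question is what replaces this curve.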

For a fixed value of the distortion function $latex 0 \le D < 1$, we work in the storage setting. Define a quantum rate distortion function in the asymptotic limit.

Barnum (in quant-ph/9806065) proved the first lower bound on the quantum rate distortion function:

$latex R_q(D) \ge \min_{S_\rho(N,D)} I_c(\rho,N)$

where the term on the right is the coherent information and $latex S_\rho(N,D) = \{N\ \mathrm{CPTP} : d(\rho,N) \le D\}$. But the coherent information can be negative, so this can’t be a tight lower bound in all cases.

Now move to the communication setting. One can define the entanglement-assisted quantum rate distortion function. They find a single-letter formula for it, given by $latex \min_{S_\rho(N,D)} \frac{1}{2} I(R;B)_\omega$ (in terms of the quantum mutual information). This is both a lower bound and achievable using quantum state splitting. Alternatively, the achievability follows from the quantum reverse Shannon theorem (QRST).

The unassisted quantum rate distortion function can also be found using the QRST, though one needs a regularized formula.

The title on arxiv.org is “Gaussian bosonic synergy: quantum communication via realistic channels of zero quantum capacity”. Realistic? Synergy?? Think about this, kids, before you go off to work in industrial research.

This paper concerns the fascinating topic of channels with zero quantum capacity. For zero classical capacity, these channels are simple to describe. Here is one example.

But zero quantum capacity occurs even for channels whose output depends on the input. For example, consider the completely dephasing channel, aka a classical channel. Obviously this has no quantum capacity. The fact that this channel has zero quantum capacity is both because it is anti-degradable (meaning that Eve’s output can simulate Bob’s; this implies 0-capacity by no-cloning) *and* because it is PPT, meaning it maps every input state to a PPT output state, or equivalently, its Jamiolkowski state (obtained by feeding in half of a maximally entangled state) is PPT. Sadly, these two conditions are still pretty much our only examples of zero-capacity channels. See a previous talk of Graeme’s for a framework that could potentially expand this set of conditions.
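A sketch of the PPT test via the Choi (Jamiolkowski) matrix, in a toy qubit setting of my own: the identity channel is flagged as NPT by a negative entanglement-witness expectation value, while the dephasing channel passes (its Choi state is in fact diagonal, hence PPT):

```python
from itertools import product

def idx(i, j):
    return 2 * i + j

def choi(channel):
    """4x4 Choi matrix of a qubit channel, given channel(i, j) -> the
    2x2 matrix N(|i><j|)."""
    C = [[0.0] * 4 for _ in range(4)]
    for i, j, k, l in product(range(2), repeat=4):
        C[idx(i, k)][idx(j, l)] = 0.5 * channel(i, j)[k][l]
    return C

def pt(C):
    """Partial transpose on the output system."""
    return [[C[idx(i, l)][idx(k, j)]
             for k, l in product(range(2), repeat=2)]
            for i, j in product(range(2), repeat=2)]

dephase = lambda i, j: [[1.0 if (i == j == k == l) else 0.0
                         for l in range(2)] for k in range(2)]
ident = lambda i, j: [[1.0 if (k == i and l == j) else 0.0
                       for l in range(2)] for k in range(2)]

def witness(C):
    """<psi-| C |psi-> with psi- = (|01> - |10>)/sqrt(2)."""
    v = [0.0, 2 ** -0.5, -(2 ** -0.5), 0.0]
    return sum(v[a] * C[a][b] * v[b] for a in range(4) for b in range(4))

w_npt = witness(pt(choi(ident)))      # negative => NPT
w_ppt = witness(pt(choi(dephase)))    # nonnegative: this test finds nothing
assert w_npt < 0 <= w_ppt
```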

Let’s talk a bit more about those two examples. The anti-degradable channels are intuitively those that give more to Eve than to Bob. So erasure channels with erasure rate >50% count, as do attenuation channels (for photons; these can be modeled by a beamsplitter) and certain depolarizing channels (using an argument due to Brandao, Oppenheim, and Strelchuk http://arxiv.org/abs/1107.4385). On the other hand, PPT channels are at least easy to calculate, and include some channels with zero private capacity.

The general question of characterizing zero-capacity channels is a very interesting one, and one that it’s not clear we are up to. But here is a nice specific version of the problem. The capacity of the depolarizing channel drops to zero at noise rate $latex p^*$, where $latex 0.2552 \leq p^* \leq 1/3$. What is $latex p^*$???

A good quote from Jon.

Superactivation demonstrates that asking about the capacity of a quantum channel is like asking about a person’s IQ: one number isn’t enough to capture everything you might want to know.

The main result: There exist Gaussian channels, each with zero quantum capacity, that can be combined to achieve nonzero capacity. These channels are more or less within the reach of current experiments (not sure about all the decoding), requiring about 10dB of squeezing and about 60 photons/channel. This is interesting in part because Gaussian channels seemed to be so well-behaved! For example, there is no NPT bound entanglement in Gaussian channels.

Infinite dimensions: This paper works in infinite dimensions, which many in our field are inherently uncomfortable with, and others are inexplicably drawn to. Like representation theory, or complexity classes with lots of boldface capital letters, the presence of phrases like “Quasi-free maps on the CCR algebra” can be either a good or a bad thing, depending on your perspective.

Q: For those of us who prefer finite dimensions, can we truncate the Hilbert space, perhaps by taking advantage of a constraint on the total number of photons?

A: In principle yes, but then the channels are no longer Gaussian, so our analysis doesn’t work, and in general, Gaussian things are easier to analyze, so that is a sense in which infinite dimensions can actually be simpler to deal with.

The story behind this work starts with the Birkhoff-von Neumann theorem (often called Birkhoff’s theorem) which states that doubly stochastic matrices (matrices with nonnegative entries and all rows and columns summing to one) are convex combinations of permutations. An analogous claim for quantum channels is that unital channels (i.e. mapping the maximally mixed state to itself) are mixtures of unitaries. However, this claim is false. More recently, Smolin, Verstraete and Winter conjectured in quant-ph/0505038 that many copies of a unital channel should asymptotically approach convex combinations of unitaries. (The rest of their paper contained evidence suggestive of this claim, and in general has nice results that are an important precursor of the merging revolution in quantum information theory.) This conjecture was called the Asymptotic Quantum Birkhoff Conjecture (AQBC) and was discussed on Werner’s open problems page. A few years later, it was shown to hold for everyone’s favorite unital-but-not-mixture-of-unitaries channel (What’s that? The one with Jamiolkowski state equal to the maximally mixed state on the antisymmetric subspace of $latex \mathbb{C}^3 \otimes \mathbb{C}^3$ of course!) by Mendl and Wolf.

Sadly, the AQBC is also false, as Haagerup and Musat recently proved. However, their paper uses von Neumann algebras, and their abstract begins

We study factorization and dilation properties of Markov maps between von Neumann algebras equipped with normal faithful states, i.e., completely positive unital maps which preserve the given states and also intertwine their automorphism groups.

…

(Another approach, in preparation, is due to Ostrev, Oza and Shor.)

This raises a new open question: is there a nice simple proof that finite-dimension-loving quantum information people can follow without having to learn any new math? (Note: probably we *should* learn more about von Neumann algebras. Just like we should make our own pizza from scratch. But if there’s a good pizzeria that delivers, I’d still like to hear about it.)

This result delivers, by disproving the AQBC with an elegant, intuitive proof. The key technical lemma is the kind of tool that I can imagine being useful elsewhere. It states that, for the problem of distinguishing two convex sets of quantum channels, there exists a single measurement strategy that works well in the worst case. This builds on a similar lemma (due to Gutoski-Watrous ’04 and Jain ’05) that is just minimax applied to state discrimination: if we want to distinguish two convex sets of density matrices, then there exists a single measurement that works well in the worst case. How well? Its performance is at least as good as what we get by choosing the worst pair of states from the two convex sets, and then choosing the optimal measurement based on these.

This paper extends this result to sets of quantum channels. The optimal distinguishability of a pair of quantum channels is given by their diamond-norm distance, which is simply the largest trace distance possible between their outputs when applied to one half of some (possibly entangled) state. To distinguish *convex sets* of channels, they use minimax again to show that there exists a measurement strategy (this time consisting both of an input state to prepare AND a measurement on the output) that works well for any worst-case choice of channel from the sets. This set of measurement strategies sounds potentially non-convex; however, Watrous’s SDP characterization of the diamond norm shows that measurement strategies form a convex set, so everything works out.

Next, this paper applies this to find ways to estimate the distance between a unital channel and the convex hull of unitary channels. To do this, they use a type of channel which they call a Schur channel (more commonly known as a Schur multiplier): given a matrix $latex A$, define the channel $latex \Phi_A(X) = A \circ X$, where $latex \circ$ is the entrywise product.

Thm: TFAE:

- $latex \Phi_A$ is a Schur channel
- $latex \Phi_A$ is unital with diagonal Kraus operators
- $latex A$ is positive and $latex a_{k,k}=1$ for each $latex 1 \leq k \leq d$.
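The unital and trace-preserving properties follow directly from the unit diagonal; here is a quick 2x2 sanity check (a toy instance of mine, not from the paper):

```python
# A positive semidefinite with unit diagonal => X -> A o X (entrywise
# product) is a unital, trace-preserving map on 2x2 matrices.
A = [[1.0, 0.6], [0.6, 1.0]]    # positive, a_kk = 1

def schur(A, X):
    """The Schur (entrywise/Hadamard) product channel Phi_A(X) = A o X."""
    return [[A[i][j] * X[i][j] for j in range(2)] for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
X = [[0.3, 0.7], [0.2, 0.7]]    # arbitrary test matrix

assert schur(A, I2) == I2       # unital: the identity is fixed
assert abs(sum(schur(A, X)[i][i] for i in range(2))
           - (X[0][0] + X[1][1])) < 1e-12    # trace-preserving
```

Both checks boil down to the same fact: the diagonal of $latex A \circ X$ is $latex a_{kk} x_{kk} = x_{kk}$.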

Using Schur channels and their discrimination lemma, they are able to make short work of the AQBC. Their next result is that any Schur channel whose convex hull of Kraus operators doesn’t include any unitary is a counterexample to the AQBC. This follows from the fact that the closest random-unitary channel to a Schur channel can be taken (up to a factor of 2 in the distance) to be a mixture of diagonal unitaries.

This work doesn’t tell us *all* of the AQBC-violating channels, but it does describe a large class of them.

They also mentioned connections to Grothendieck’s inequality (work in progress) and Connes’ embedding problem. Connections to these two topics were also mentioned in Haagerup and Musat’s work.

__Markus Greiner__, Quantum Magnetism with Ultracold Atoms – A Microscopic View of Artificial Quantum Matter.

Some motivation: many condensed matter models can’t be tested with current experiments. Even in simple models it is difficult to calculate basic quantities. Want to build new materials. The idea: use ultracold atoms in optical lattices to simulate condensed matter models. The dream of a quantum simulator is to build a special purpose device which is still useful before full-scale quantum computers are built.

He mentioned that quantum simulators are robust to errors… I believe that this is an open question which theorists should be working on.

How do we cool the atoms? First trap the atoms with lasers. For weakly interacting quantum gases, we can form a BEC where all of the particles can be well described by a single wavefunction. But for strongly correlated quantum gases there are interactions which preclude such a description. In an optical lattice, we have atoms interacting strongly on lattice sites. There is new physics here: for example, superfluid to Mott insulator transition. We can use the optical lattice to make synthetic matter: we can simulate electrons in a crystal. With fermionic atoms, we can observe a fermionic superfluid and things like the BEC-BCS crossover. Bose-Hubbard models and high-T_c superconductivity could also be probed in these systems. Also, quantum magnets, low-dimensional materials, etc.

Let’s focus on quantum magnetism. The main experimental tool is the quantum gas microscope. The goal is to take “bottom-up” tools like high-fidelity readout and local control from e.g. ion traps to more “top-down” systems like optical lattices. (This is clearly related to scalability of quantum computation, ultimately.) The quantum gas microscope can image fluorescence from single atoms trapped in the lattice. To image this, they measure parity.

He then showed a movie of atoms hopping around in a shallow lattice (in the classical regime, obviously). Very cool. Here is one still from the movie:

The Hamiltonian for this system is a hopping term and an interaction term which penalizes multi-site occupations: a basic Bose-Hubbard model. If the hopping terms dominate, then it’s a superfluid; if the on-site repulsion is dominant, we get a Mott insulator. They can get approximately unit site occupation number in the Mott insulator regime.

How to measure interesting quantities, like correlation functions? Measure the site occupation number (mod 2) on neighboring sites. One can push this further and measure string order parameters. In the future, the hope is to measure more complicated things like Chern number.

They observed the Lieb-Robinson light-cone of correlations.

Algorithmic cooling: Suppose that you have random site occupations. Then you can try to “cool” the system by pushing some of the entropy into a subsystem and get a pure state on the rest. They use Rydberg blockade interactions to try to smooth the site occupation number fluctuations, and then transfer to a superfluid. The temperature is not usually constant in their system, but the entropy is consistently low.

Quantum magnetism: Ising spin models. First consider a chain with on-site longitudinal and transverse fields. There are two phases: paramagnetic and antiferromagnetic. By putting a Mott insulator in an electric field, we can observe this type of transition. We can try to extend these results to two dimensional frustrated spin systems. For example, dimerized states (four-state clock model). These models have spin liquid-like groundstates.

Towards a universal quantum computer: The toolbox includes low entropy initialization, single site readout, single site control, and interactions, all with high fidelity. So what does the future hold?

Q: What types of error are these quantum simulators robust to?

A: Temperature is one type of error. Thermalization might not always occur. Sometimes you want some disorder, since real systems also have disorder.

Q: When you say “fidelity” what do you mean by that?

A: We mean the Hamming distance between classical basis states when we talk about paramagnetic order. That is, our fidelity is 99% if a fraction 99/100 of our spins are correctly aligned when we image the sample.

One question that I didn’t get to ask: what can they say about models which have degenerate ground states? I’m obviously thinking about e.g. the toric code and other topologically ordered models. Can they prepare specific pure states in the ground state manifold?

Some basic primitives for secure transactions:

Unlike QKD, here there are only 2 players here who distrust each other.

Let Alice and Bob’s optimal cheating probabilities be $latex P_A^*$ and $latex P_B^*$.

The best known quantum protocol achieves

$latex \max\{P_A^*, P_B^*\} = 3/4$ [Ambainis ’01]

and the best known lower bound is

$latex \max\{P_A^*, P_B^*\} \geq 1/\sqrt{2}$ [Kitaev ’03]

We improve the bound to $latex \max\{P_A^*, P_B^*\} \geq 0.739$ and build QBC protocols that get arbitrarily close to this bound.

The proof uses Mochon’s weak coin flipping protocol.

*Gilles Brassard, Merkle Puzzles in a quantum world.* Joint work with Peter Hoyer, Kassem Kalach, Marc Kaplan, Sophie Laplante, Louis Salvail, based on arXiv:1108.2316

Ralph Merkle was the inventor of Public Key Crypto in 1974 before Diffie and Hellman (but after the classified discovery). Gilles began with a great quote from Merkle:

The human mind treats a new idea the same way as the human body treats a new protein.

See Merkle’s web site for the amazing story of how he proposed public-key cryptography as a class project in grad school, and the professor described it as “muddled terribly”, and then he submitted it to JACM, which rejected it because it was “outside of the mainstream of cryptographic thinking.”

Key Distribution problem as imagined by Merkle:

Alice and Bob talk over an authenticated public channel. After doing so Alice and Bob should get a shared secret. Can only be computationally secure, but back then it was not even thought to be computationally possible.

Make the eavesdropper’s effort grow as fast as possible compared to Alice and Bob’s effort

We will measure effort by query complexity to a random oracle. We use that model because it allows us to prove a lower bound.

Merkle’s 1974 idea:

Choose a hard-to-invert, but injective, function f that acts on $latex [N^2]$.

Alice chooses N random values of x and computes the values of f(x).

Alice sends all these values to Bob.

Bob wants to invert one of them.

He keeps trying his own random x’s until he gets a collision; call it s.

Bob sends f(s) to Alice and Alice finds secret s.

So after N queries by Alice and about N by Bob, they share the same s

The eavesdropper has seen only N useless f(x) values and one useful f(s); inverting it costs about $latex N^2/2$ queries, compared to N for Alice and Bob
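The whole scheme fits in a few lines; here is a toy simulation, with a seeded random permutation standing in for the hard-to-invert f (illustrative parameters, my own choices):

```python
import random

# Toy simulation of Merkle's 1974 key-distribution puzzle.
rng = random.Random(0)
N = 32
domain = N * N

# A random injective "hard-to-invert" function on [N^2].
table = list(range(domain))
rng.shuffle(table)
f = lambda x: table[x]

# Alice: N random inputs, publishes their images.
alice_x = rng.sample(range(domain), N)
published = {f(x): x for x in alice_x}

# Bob: query random inputs until he hits one of Alice's images.
bob_queries = 0
while True:
    bob_queries += 1
    s = rng.randrange(domain)
    if f(s) in published:
        break

# Bob announces f(s) in public; Alice inverts it from her own table.
assert published[f(s)] == s     # shared secret agreed
# Bob's expected effort is ~N queries; Eve, seeing only f(s), must
# search ~N^2/2 inputs on average.
```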

Ok, so this is only a quadratic separation, but we don’t actually know that Diffie-Hellman is any better. The lower bound was proven by Barak and Mahmoody ’08.

What about the quantum world, where the eavesdropper is quantum? Alice and Bob could be quantum but don’t need to be. The channel remains purely classical, so Alice and Bob can’t do QKD.

If Eve is quantum, and Alice and Bob are classical, then Eve can find the key in O(N) time, which is the same amount of effort that Alice and Bob expend (albeit quantum and not classical). Unfortunately, this breaks the Merkle scheme completely.

Can it be repaired by allowing Alice and Bob to be quantum, or even keeping them classical?

One option is to keep Alice classical, use a space of size $latex N^3$, and allow Bob to use BBHT to find one preimage in O(N) quantum queries. Eve using Grover needs $latex N^{3/2}$ queries, so Alice and Bob get an advantage over Eve.

An improved protocol is for Bob to find two preimages in O(N) quantum queries.

Bob sends Alice the bitwise XOR of f(s) and f(s’). Given this Alice uses a table to find the secret.

This gives effort $latex O(N^{5/3})$ for Eve and O(N) for Alice and Bob. (It’s not trivial to show Eve can do this well, and even less trivial to show that this is optimal.)

The proof involves something similar to Ambainis’ element distinctness algorithm with a quantum walk on the Johnson graph. A crucial observation relates to the composition of our variant of element distinctness on N elements with searching each function value in a set of size $latex N^2$. This involves a new composition theorem for non-boolean functions.

What if Bob is classical?

Like before, there are two functions f and t, each acting on $latex N^2$ points.

Bob finds two preimages as in Merkle. The rest is the same: bitwise XOR.

How much time does a quantum eavesdropper need? $latex N^{7/6}$.

Also, there is a sequence of protocols for which best attack we can find tends to $latex N^2$ in the limit. See paper for details. 🙂

Position-based crypto (i.e. authentication of position) by distance-bounding techniques

Challenge response protocol

V1 ———-P—————–V2

Verifiers send challenges timed for synchronized arrival at $latex r_0$, the claimed position

honest prover computes and sends answers

Challenge/response is susceptible to cheating by colluding provers, each copying and forwarding the challenge. This shows why classical protocols are insecure.

Kent, Munro, Spiller 2011, with both security and insecurity proofs

Cheating strategy uses teleportation

In general a quantum protocol is a bipartite unitary.

An attack is a combination of local operations, shared entanglement and one round of communication sufficient to simulate it.

Vaidman ’03 achieves this with doubly exponential cost in ebits, via recursive teleportation

Buhrman et al 09 key insights:

Simplified protocol uses $latex O(2^n)$ ebits

Uses port-based teleportation, due to Ishizaka and Hiroshima (PRL 2008)

Share $latex K$ maximally entangled states

Highly complicated measurement.

Recovery is very simple. Alice discards all but one of her halves of entangled states

New protocol for instantaneous measurement

Position based crypto: cheating landscape

Vaidman: doubly exponential entanglement lets cheaters always cheat

Port-based: singly exponential entanglement suffices for cheating

If cheaters are restricted to classical communication and only linear entanglement, one can get exponential soundness for position-based crypto

If cheaters are allowed quantum communication and linear entanglement, one can get only constant soundness

__Florian Speelman__, Garden Hose Game and Application to Position-Based Quantum Cryptography. Joint work with Harry Buhrman, Serge Fehr, Christian Schaffner, based on arXiv:1109.2563.

Position verification applications:

- launching nuclear missiles (against enemies not equipped with FTL neutrinos, of course)
- preventing prank calls of ambulances, SWAT teams or pizzas
- communicating with right wifi router
- computing international roaming fees properly
- for when you want to know you are talking to S. Korea and not N. Korea
- proving that “live”-blogging is actually occurring from the conference location, and is not just the result of typing out our previous opinions of the results.

On the positive side, one can show security if promised there is no shared entanglement.

They show a specific class of schemes that obtains a tradeoff: increased classical communication for honest players forces cheaters to use more entanglement.

Example: classically secure but broken by teleportation

Verifier 0 sends psi to Prover

Verifier 1 sends bit to prover

Prover routes psi left or right according to bit

Instead of one bit use a function

V0 sends state and n-bit string x to prover

V1 sends an n bit string y to prover

Prover computes the function f(x,y) and sends psi to V0 or V1 depending on the outcome

Nice thing is that it’s no harder for honest prover

But for cheating provers, more entanglement is needed. They can do it with $latex 2 \times 2^n$ ebits

Garden hose model

A discrete way of looking at attacks of this sort

Alice and Bob share s pipes between them. Alice has a water tap

Alice and Bob use hoses, like Enigma plug cords, to connect the ends of the pipes on their respective sides in some pattern

f(x,y) is whether water comes out on left or right

The garden-hose complexity of a function is the number of pipes needed to compute it

Every piece of hose is a teleportation
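Here is a minimal simulator of the water flow, under my own formalization of the matching rules:

```python
# A minimal garden-hose simulator (my own formalization of the rules).
# Pipes are numbered 0..s-1. Alice's matching pairs up pipe-ends on her
# side, with the tap attached to one pipe; Bob's matching pairs up ends
# on his side. Water alternates sides until it exits at an unmatched end.

def flow(tap, alice_pairs, bob_pairs):
    a = {}
    for u, v in alice_pairs:
        a[u], a[v] = v, u
    b = {}
    for u, v in bob_pairs:
        b[u], b[v] = v, u
    pipe, side = tap, "bob"          # water leaves the tap toward Bob
    while True:
        match = b if side == "bob" else a
        if pipe not in match:
            return side              # water spills out on this side
        pipe = match[pipe]
        side = "alice" if side == "bob" else "bob"

# Example: 1-bit routing with 2 pipes. Alice attaches the tap to pipe 0.
# Bob, on input bit y, either leaves pipe 0 open (water exits his side)
# or connects pipes 0-1 so it exits on Alice's side.
assert flow(tap=0, alice_pairs=[], bob_pairs=[]) == "bob"
assert flow(tap=0, alice_pairs=[], bob_pairs=[(0, 1)]) == "alice"
```

Each traversal of a pipe corresponds to one teleportation in the attack, which is why the number of pipes counts ebits.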

Can use GH model to prove upper bounds

Equality: $latex 2n+O(\log n)$

Inner Product: $latex 4n$

Majority: $latex n^2$

If f is in logspace, then GH(f) is poly(n).

But there do exist functions with exponential garden-hose complexity.

Can prove lower bounds for explicit functions, but best we can prove are linear.

## Poster Session 1

The poster session had tons of great posters that we will not remotely do justice. Instead, here are some photos from a few of them. Try to guess which ones. 🙂