API is an abbreviation for “Application Programming Interface.” Roughly speaking, an API is a specification of a software component in terms of the operations one can perform with that component. For example, a common kind of API is the set of methods supported by an encapsulated bit of code, a.k.a. a library. (A library might have the purpose of “drawing pretty stuff on the screen”; its API is then the set of commands like “draw a rectangle,” along with a specification of how you pass parameters to each method, how rectangles overlay each other, etc.) Importantly, the API is supposed to specify how the library functions in a way that is independent of the inner workings of the library (though this wall is often broken in practice). Another common kind of API arises when a service exposes remote calls that can be made to manipulate and perform operations on that service. For example, Twitter supports an API for reading and writing Twitter data. This latter example, of a service exposing a set of calls that can manipulate data stored on a remote server, is particularly powerful, because it allows one to gain access to the data through simple access to a communication network. (As an interesting aside, see this rant for why APIs are likely key to some of Amazon’s success.)
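To make the library kind of API concrete, here is a toy sketch in Python. Everything in it (the `Canvas` class, the `draw_rectangle` method) is made up for illustration: the point is that callers rely only on the public methods, while the pixel storage is an implementation detail the API deliberately hides.

```python
class Canvas:
    """Public API: callers rely only on these method signatures."""

    def __init__(self, width: int, height: int):
        # Internal representation -- an implementation detail hidden
        # behind the API; it could be swapped out without breaking callers.
        self._pixels = [[0] * width for _ in range(height)]

    def draw_rectangle(self, x: int, y: int, w: int, h: int, color: int = 1):
        """Fill the axis-aligned rectangle with top-left corner (x, y)."""
        for row in range(y, min(y + h, len(self._pixels))):
            for col in range(x, min(x + w, len(self._pixels[0]))):
                self._pixels[row][col] = color

    def filled_area(self) -> int:
        """Number of non-background pixels (also part of the public API)."""
        return sum(p != 0 for row in self._pixels for p in row)

canvas = Canvas(10, 10)
canvas.draw_rectangle(2, 2, 3, 4)
print(canvas.filled_area())  # 12: a 3x4 rectangle
```

A caller who only ever touches `draw_rectangle` and `filled_area` keeps working even if `_pixels` is later replaced by, say, a numpy array; that independence is the whole point of the wall between API and implementation.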
As you might guess (see, for example, my latest flop Should Papers Have Unit Tests?), I like smooshing together disparate concepts and seeing what comes out the other side. Thinking about APIs led me to the question: “What if papers had APIs?”
In normal settings academic papers are considered to be relatively static objects. Sure, papers on the arXiv, for example, have versions (some more than others!), and there are efforts like Living Reviews in Relativity, where review articles are updated by the authors. But in general papers exist as fixed, “complete” works. In programming terms we would say that they are “immutable.” So if we consider the question of exposing an API for papers, one might think that this would just be a read-only API. And indeed this form of API exists for many journals, and also for the arXiv. These forms of “paper APIs” allow one to read information, mostly metadata, about a paper.
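As a concrete example of such a read-only paper API: the arXiv exposes paper metadata as an Atom feed at `http://export.arxiv.org/api/query?id_list=<arxiv-id>`. To keep this sketch self-contained (no network call), it parses an abbreviated sample response whose id and title are made-up placeholders, using only the standard library.

```python
# The live call would be:
#   urllib.request.urlopen("http://export.arxiv.org/api/query?id_list=<arxiv-id>")
# which returns an Atom XML feed. Here we parse a fabricated sample instead.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>http://arxiv.org/abs/0000.00000v1</id>
    <title>An Example Paper</title>
  </entry>
</feed>"""

ns = {"atom": "http://www.w3.org/2005/Atom"}
entry = ET.fromstring(SAMPLE_FEED).find("atom:entry", ns)
title = entry.find("atom:title", ns).text
paper_id = entry.find("atom:id", ns).text
print(paper_id, "->", title)
```

Note that everything here is read-only: the feed tells you about the paper, but no call in the API changes the paper itself.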
But what about a paper API that allows mutation? At first glance this heresy is rather disturbing: allowing calls from outside of a paper to change the content of the paper seems dangerous. It also isn’t clear what benefit could come from this. With, I think, one exception. Citations are the currency of academia (last I checked, however, they were still not fungible with bitcoins). But citations really only go in one direction (with exceptions for simultaneous works): you cite a paper whose work you build upon (or whose work you demonstrate is wrong, etc.). What if a paper exposed a reverse citation index? That is, I put my paper on the arXiv, and then, when you write your paper showing how my paper is horribly wrong, you can make a call to my paper’s API that mutates my paper and adds links to your paper. Of course, this seems crazy: what is to stop rampant spamming of back-citations, especially by *ahem* cranks? Here it seems that one could implement a simple approval system for the receiving paper. If this were done on some common system, then you could expose the mutated paper either (a) with approved mutations only or (b) with unapproved mutations as well (or one could go ‘social’ on this problem and allow voting on the changes).
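The proposal above can be sketched as a tiny in-memory model. To be clear, everything here is hypothetical: the method names (`add_reverse_citation`, `approve`) and the approval flow are my own invention, not any existing service’s API.

```python
class Paper:
    """Hypothetical mutable paper with a moderated reverse-citation index."""

    def __init__(self, arxiv_id: str):
        self.arxiv_id = arxiv_id
        self.pending = []    # reverse citations awaiting author approval
        self.approved = []   # reverse citations the author has accepted

    def add_reverse_citation(self, citing_id: str, location: str):
        """The mutating API call: anyone may propose a back-link,
        optionally pointing at a specific location (e.g. 'Theorem 2')."""
        self.pending.append((citing_id, location))

    def approve(self, citing_id: str):
        """Author-side moderation, to keep out citation spam."""
        for item in list(self.pending):
            if item[0] == citing_id:
                self.pending.remove(item)
                self.approved.append(item)

    def render(self, include_unapproved: bool = False):
        """View (a): approved mutations only; view (b): everything."""
        cites = self.approved + (self.pending if include_unapproved else [])
        return sorted(cites)

paper = Paper("1501.00001")
paper.add_reverse_citation("1502.99999", "Theorem 2")
paper.add_reverse_citation("1503.11111", "Sec. 4")
paper.approve("1502.99999")
print(paper.render())                         # view (a): approved only
print(paper.render(include_unapproved=True))  # view (b): both
```

The `location` field is what would make these reverse citations more precise than a “cited by” list, as discussed next.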
What benefit would such a system confer? In some ways it would make more accessible something that we all use: the “cited by” index of services like Google Scholar. One difference is that it could be possible to be more precise in the reverse citation: for example while Scholar provides a list of relevant papers, if the API could expose the ability to add links to specific locations in a paper, one could arguably get better reverse citations (because, frankly, the weakness of the cited by indices is their lack of specificity).
What else might a paper API expose? I’m not convinced this isn’t an interesting question to ponder. Thanks for reading another wacko mashup episode of the Quantum Pontiff!
Talks from QIP 2015 are now available on this YouTube channel. Great to see! I’m still amazed by the wondrous technology that allows me to watch talks given on the other side of the world, at my own leisure, on such wonderful quantum esoterica.
Here also are links to the Pontifical liveblogging and the QIP schedule of talks.
A QIP 2015 epilogue: our notes from the business meeting. See also this post by Kaushik Seshadreesan.
Finance:
$193,545 – $191,467 = $2,078 profit!
registration income: $185,340
refunds, about $3,000
external sponsorships: $30,450, and another $5k due later
total income before tax: $212,900
after tax: $193,545
Expenses:
tutorial: $5,296
main program: $47,941
banquet: $120*270 = $32,400
admin: $10k
travel support for about 41 junior researchers: $34k+
invited speakers: $45k estimated
rump session: $10,600 estimated
best student paper prize: $700
other/misc: $5k
total: $191,467
Registration:
total: 276
in 2014: 261 (early before 31 oct, 169; standard by 30 nov, 68; late by 31 dec, 29)
in 2015: 15 (onsite 10)
noshow: 10
It’s great that the budget was balanced to about 1%. However, what to do about the little bit of extra money? This is a perpetual problem. Runyao had a nice simple idea: just send it to next year’s QIP and use it for travel support for junior researchers.
197 talk-or-poster submissions (1 withdrawn). (In Barcelona there were 222, but this decrease probably reflects the distance to Sydney rather than anything about the field.)
3 PC members for each submission, and 25 submissions per PC member.
3 weeks of refereeing, 2 weeks of discussion.
Much faster than a typical theoretical CS conference
39 accepts, including 2 mergers (a 20% acceptance rate)
The SC invited 3 more speakers, for 40 talks in the program
6 of these were recommended to the SC for plenary status
one best student paper
There were 601 reviews, at least 3 per submission
There were 142 external reviewers and 220 external reviews.
In the first round there were 102 posters accepted: 5 poster-only submissions, plus rejected talk-or-poster submissions.
In the second round there were 92 more poster submissions, 90 of them accepted… one was out of scope and one was wrong.
About 40 people withdrew their posters or simply didn’t put up a poster.
We could have accepted about 20-30 more good papers. Future choice: accept more papers? This implies parallel sessions (if we decide to accept all of those good-enough-for-QIP papers). There are pros and cons of this. Pro: more people will be happy, and better representation of research. Con: the community will be more split, and the conference needs two medium-size lecture rooms (but what about the plenary talks?).
Anecdotal feedback from authors: some reviews were sloppy. On the other hand, with only 3 weeks of refereeing we cannot expect too much. CS reviewers are more detailed and more critical.
Do we want the 3page abstract format? There was not much discussion on this point, but Ronald de Wolf said that the costs outweigh the benefits in his opinion. We don’t have strong opinions. Steve likes them but thinks maybe two pages would be enough. Aram thinks we could do without them, or could go to 4 pages so the physicists could use their PRL and the computer scientists could use the first 4 pages of their long version. Apparently no one in the audience had strong opinions on this either, since there was no discussion of this point. Hopefully the next PC chair at least thinks carefully about this instead of going with the default.
Do we want to have the abstracts on the website? Again, there was no discussion of this point, but RdW thinks this is generally a good idea (and us Pontiffs agree with him).
Should we make the reviews public (subject to caveats)? E.g., something like what was done for TQC 2014, where lightly edited reviews were posted on SciRate. The answer is obviously yes. :) We made a case for partial open reviewing, and the slides are here. The “partial” here is important. I think a lot of people have misinterpreted our proposal and counter-proposed a compromise in which only edited summaries of reviews are posted for accepted papers; this is funny, because it is essentially what we did for TQC 2014! It is true that in implementing this the details are extremely important, including the instructions to the PC & subreviewers and the explanations of the system to authors and the public (e.g., crucially including the fact that published reviews are not meant to explain acceptance/rejection, or even to label papers as “good” or “bad,” but rather to add perspective). Probably these points should be in a longer post.
QIP 2016 will be held in Banff, with Barry Sanders chairing the local organizing committee.
Bids for QIP 2017 are being put in by Zürich and Seattle, with local organizing committee chairs Rotem Arnon-Friedman and Krysta Svore respectively. (I mention chairs because my understanding is that no matter how large the committee is, they do a large fraction, or even the majority, of the total work.) A straw poll of attendees showed slight favor for Zürich. Krysta said that MSR would probably still be interested in hosting in 2018, when the geographic case for Seattle would be stronger. Neither place will be as glorious as Sydney in January, but Seattle winters are pretty mild (although gray).
Stephanie Wehner presented the results of a poll that showed support for parallel sessions (about 50% of respondents favored this over options like dropping plenary talks, dropping the free afternoon, shorter talks or doing nothing). Others, like Daniel Gottesman, complained that the poll seemed to be asking how to increase the number of talks, rather than whether we should. A show of hands at this point (from an audience that by now had become pretty small, perhaps in part because there was free beer at the rump session at this point) showed an audience roughly evenly divided between supporting and opposing an increase in the number of talks. The Trinity of Pontiffs are divided on the parallel question, but of course it doesn’t have to be all or nothing. We might try an experiment doing parallel talks on one day (or even halfday) out of five, so we can see how we like it.
Today’s highlight: an algorithm running in time poly(n), also known as “polynomial time” or “efficient.”
Thomas begins by reminding everyone what NP is and mentions that Super Mario is NP-complete. However, I solved it as a child and saved the princess from Bowser. I’m not sure what the implications of this are for P vs. NP. Knowing that Super Mario is in NP saves me from having to learn it. I just reduce it to 3-SAT.
All problems in NP have a local verification procedure, because of the NP-completeness of 3-SAT. But we still need to know the whole proof, right?
There is a nice way to make proof verification completely local, and that is to introduce more than one prover. We introduce a familiar cast of characters: Merlin and Arthur. Arthur is the referee and is allowed to ask two non-communicating Merlins to prove something to him. The value ω of the game is defined to be the maximum probability, over the Merlins’ strategies, that Arthur accepts. We need to ensure that this scheme is both sound and complete. A stronger version, known as the PCP theorem, has the following interesting property: Arthur can do some preprocessing and then just check a constant number of clauses to get a high probability of soundness.
The 3-local Hamiltonian problem is a well-known complete problem for QMA. Here one is given a local Hamiltonian on n qubits with a promise that the ground state energy is either less than a or greater than b, with b − a ≥ 1/poly(n) (the “promise gap”), and one must decide which is the case.
In the quantum setting, we allow the Merlins to share initial entanglement, but they can’t communicate after that. Now the value is denoted by ω*. The star denotes this additional quantum entanglement shared by the Merlins.
Can Arthur use the entangled Merlins to his advantage, to recognize languages beyond NP? The Peres-Mermin “magic square” game makes this unclear, since at least in some cases the Merlins can “cheat” and use entanglement to increase the acceptance probability. But it was shown [KKMTV 08, IKM 09] that Arthur can still use entangled Merlins to at least recognize languages in NP. An aside: this, like most previous work, viewed the entanglement between the Merlins mostly as a drawback, to be mitigated using techniques like monogamy relations that force the provers to use strategies resembling unentangled strategies.
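To make the entangled-vs-classical value gap concrete, here is a brute-force computation of the classical value ω of the standard CHSH game (win iff a XOR b equals x AND y, with uniformly random single-bit questions): the maximum over deterministic strategies is 3/4, while entangled Merlins can famously reach about 0.85. This game is just an illustration; it is not the specific game used in the talk.

```python
from itertools import product

def chsh_classical_value():
    """Maximize the CHSH winning probability over deterministic strategies."""
    best = 0.0
    # A deterministic two-prover strategy is a pair of answer tables:
    # Merlin-1 answers a0 on question 0 and a1 on question 1; likewise Merlin-2.
    for a0, a1, b0, b1 in product([0, 1], repeat=4):
        wins = 0
        for x, y in product([0, 1], repeat=2):  # uniformly random questions
            a = a0 if x == 0 else a1
            b = b0 if y == 0 else b1
            if (a ^ b) == (x & y):
                wins += 1
        best = max(best, wins / 4)
    return best

print(chsh_classical_value())  # 0.75
```

Shared randomness doesn’t help beyond this (the maximum of an average is attained at a deterministic point), which is why the quantum value ω* ≈ 0.85 genuinely separates the two settings.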
To illustrate the difficulties, suppose we have a 1D Hamiltonian with nearestneighbor interactions. Suppose that these are antiferromagnetic so that the unique ground state of each twoqubit term is the singlet, which we say has zero energy. This is of course highly frustrated and the groundstate energy will be proportional to the number of qubits. But a naive twoprover proof system would allow us to be convinced that the groundstate energy is zero. Suppose we can split the qubits into even and odd layers that are mutually commuting. We can have Merlin1 take the odd qubits and Merlin2 take the even qubits. We choose a random interaction, say on sites j and j+1, and ask Merlin1 for one of them and Merlin2 for the other. But this doesn’t work. The Merlins need only share a singlet state which they just return regardless of which question we ask.
The main result is a five-player game for the 3-local Hamiltonian problem. The messages from the Merlins to Arthur are quantum, but very short. The value of the game, with classical questions, 3 answer qubits, and 5 players, is QMA-hard to compute to within an inverse-polynomial factor. Consider the ground state of the Hamiltonian, encoded qubit-by-qubit in the 5-qubit code. We will require the five Merlins to each hold one share of each of these encoded qubits, so each Merlin has n qubits.
The protocol is as follows. Pick a random clause on three qubits and either test the energy of that clause or test that the corresponding shares lie in the code space.
The intuition is that the Merlins are forced to be in a code space, with the states of 4 Merlins constraining the fifth. How is this proven?
The most general Merlin strategy is to share a state |ψ⟩ and to respond to a request for qubit i by applying a unitary U_i. We would like to argue that any such strategy can be turned into a method for extracting all n qubits from the state |ψ⟩.
This will be done using a method similar to a mosquito drinking blood: as it extracts blood, it replaces the volume of liquid with its saliva. Here we extract a qubit using U_i (actually its inverse), then replace those qubits with halves of EPR pairs, and then each prover applies U_i again. Actually, the other halves of the EPR pairs are never used, so even a maximally mixed state would work. The point is just to put something there, effectively performing a logical SWAP.
This work also leads to a natural reformulation of the quantum PCP conjecture: constant-factor approximations to the value of a quantum game with entangled provers are QMA-hard. The result is a first step in this direction, solving the case of an inverse-polynomial factor.
Another consequence is for MIP and related complexity classes. MIP_{c,s} refers to the class of problems with a multiprover interactive proof with completeness c and soundness s. In this language, the PCP theorem implies that NEXP = MIP_{1,1/2}.
In the quantum case, Thomas proved that NEXP ⊆ MIP* in an earlier breakthrough. This work now shows that QMA is contained in the class of entangled-prover games described above, proving for the first time that entanglement increases the power of quantum proof systems. Here “proving” is in the usual complexity-theory sense, where we have to make some plausible assumption: in this case, that QMA ≠ NP.
During the questions, Ronald de Wolf and Daniel Gottesman pointed out that you might be able to reduce it from 5 Merlins to 4 by using error-detecting codes, or even to 3 by using qutrit quantum error-detecting codes. Or what about 2, using approximate QECC? (This last probably won’t work.)
(This talk was presented by the heroic David Gosset because Sergey didn’t get a visa in time.)
The transverse Ising model (TIM) is important for several reasons. For one thing, this model is ubiquitous in the theory of phase transitions. It’s a good model of certain non-universal quantum devices like the D-Wave machine. And the recent breakthrough of Cubitt and Montanaro shows that the TIM appears naturally as a possible intermediate complexity class.
We would love to understand the quantum annealing (QA) algorithm of Farhi et al., and unfortunately we won’t be able to do that here. But we can use it to set up a target-simulator model that lets us do reductions between various Hamiltonian complexity problems. For example, if we have a simulator QA machine restricted to TIM Hamiltonians, then it is unlikely that we can simulate a target QA machine with access to general 2-local Hamiltonians (which are BQP-complete). The simulator TIM machine is contained in PostBPP, which is unlikely to equal BQP.
Recall the class of “stoquastic” local Hamiltonians. These are local Hamiltonians with “no sign problem,” meaning that all off-diagonal matrix elements in the computational basis are real and non-positive. There is a natural complexity class, StoqMA, that captures the complexity of these Hamiltonians.
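The “no sign problem” condition is easy to check numerically. The snippet below builds a small transverse Ising Hamiltonian H = −J Σ Z_i Z_{i+1} − h Σ X_i (open chain, with the sign convention h ≥ 0) and verifies that every off-diagonal element in the computational basis is non-positive; the ZZ terms are diagonal and the −hX terms supply the off-diagonal entries −h.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op_on(site_op, site, n):
    """Tensor site_op into position `site` of an n-qubit identity string."""
    ops = [site_op if k == site else I2 for k in range(n)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def tim_hamiltonian(n, J=1.0, h=0.7):
    """Open-chain transverse Ising model: H = -J sum ZZ - h sum X."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_on(Z, i, n) @ op_on(Z, i + 1, n)
    for i in range(n):
        H -= h * op_on(X, i, n)
    return H

H = tim_hamiltonian(4)
off_diag = H - np.diag(np.diag(H))
print(bool(np.all(off_diag <= 1e-12)))  # True: the TIM is stoquastic
```

Flipping the sign of h would break the condition in this basis, though here a local basis change (conjugating by Z on alternating sites, say) can restore it; “stoquastic” is a basis-dependent notion.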
StoqMA is like QMA, but Arthur can only apply reversible classical gates (CNOT, Toffoli) and measure some fixed qubit in the X basis. He accepts iff the measurement outcome is |+⟩. He can use |0⟩ and |+⟩ ancillas.
StoqMA’s relation to other complexity classes: MA ⊆ StoqMA ⊆ SBP ⊆ AM.
(SBP is an approximate counting class.)
Main result: the local Hamiltonian problem for the TIM on degree-3 graphs is StoqMA-complete. This sharpens the Cubitt-Montanaro classification by linking the TIM directly to StoqMA.
In the ferromagnetic TIM, the coupling terms are all positive and the Z-field is uniform. Another result is a polynomial-time approximation algorithm for the partition function of the ferromagnetic TIM. This generalizes, and in fact makes use of, a result of Jerrum & Sinclair from 1993. The run time is polynomial in the system size and the inverse accuracy. Awesome. Taking the log of the partition function, one gets the free energy within an additive constant, and at low temperature this approximates the ground state energy.
Given some empirically observable variables, which correlations between these are compatible with a presumed causal structure? This is a fundamental problem in science, as illustrated by the following situation. You would like to claim that smoking causes cancer. But all you have is correlation data… and correlation doesn’t imply causation. So when can we make that inference?
One of the main ideas of this work is to use entropies rather than probabilities in order to avoid headaches associated with nonconvex structure that appears in previous approaches to answering these types of questions.
Classically, Directed Acyclic Graphs (DAG) have edges encoding a causal relationship between the nodes that they connect. Conditional Independences (CIs) are encoded by a DAG; this is the causal structure. A given probability distribution is compatible with a given causal structure if it fulfills all of the CI constraints implied by a given DAG.
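Here is a small numerical illustration of compatibility with a causal structure, for the common-cause DAG A ← λ → B: any distribution that factorizes according to this DAG satisfies the CI constraint A ⟂ B | λ, even though A and B may be marginally correlated. The conditional probability tables below are arbitrary made-up examples.

```python
import itertools

# DAG: A <- lambda -> B, all variables binary.
p_lam = {0: 0.5, 1: 0.5}
p_a_given_lam = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_b_given_lam = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}

# A compatible joint distribution factorizes along the DAG:
# P(a, b, lam) = P(lam) P(a|lam) P(b|lam).
joint = {
    (a, b, l): p_lam[l] * p_a_given_lam[l][a] * p_b_given_lam[l][b]
    for a, b, l in itertools.product([0, 1], repeat=3)
}

def marginal(keep):
    """Marginalize the joint onto the variables named in `keep` ('a','b','l')."""
    out = {}
    for (a, b, l), p in joint.items():
        key = tuple(v for v, k in zip((a, b, l), "abl") if k in keep)
        out[key] = out.get(key, 0.0) + p
    return out

# CI check: A indep B given lam  <=>  P(a,b,l) P(l) = P(a,l) P(b,l).
pal, pbl, pl = marginal("al"), marginal("bl"), marginal("l")
ci_holds = all(
    abs(joint[(a, b, l)] * pl[(l,)] - pal[(a, l)] * pbl[(b, l)]) < 1e-12
    for a, b, l in itertools.product([0, 1], repeat=3)
)
# Marginally, A and B are still correlated: P(a,b) != P(a) P(b).
pab, pa, pb = marginal("ab"), marginal("a"), marginal("b")
correlated = abs(pab[(0, 0)] - pa[(0,)] * pb[(0,)]) > 1e-6
print(ci_holds, correlated)  # True True
```

This is exactly the smoking/cancer dilemma in miniature: observing only A and B (a “marginal scenario,” since λ is hidden), the correlation alone cannot tell this common-cause model apart from a directly causal one.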
Marginal Scenarios: Usually, for a variety of reasons, not all variables in a DAG are observable.
Probabilities compatible with a causal structure must fall within a certain region, which in the Bell scenario is a polytope (Bell’s theorem). However, in an example due to Geiger and Meek, which is symmetric and consists of three unseen variables, each causing two of the three observable events A, B, and C, the geometry is non-convex.
Classically, passing to the entropic description gives a description of marginal causal entropic cones, in the entropic Bell inequality framework pioneered by Braunstein & Caves in 1988.
A variant of this is the common ancestor problem.
Quantumly, there is not a unique definition of causal structure. What should we use? Informally, it should specify the dependencies between a collection of quantum states and classical variables. We use three kinds of nodes.
Because measurement affects a quantum system, some CIs that are classically valid cannot even be defined in the quantum case. But some independencies still hold, and we can use the data processing inequality. One consequence is a strengthening of the information causality (IC) inequality, which also allows it to be generalized, e.g., to quantum messages (see also 1210.1943).
What makes quantum computing work? Many explanations have been proffered in the past: entanglement, contextuality, superposition and interference, and even quantum discord. While it seems fair to say that we really have no idea what makes a quantum computer “work” so well, we do have some ideas for some specific models.
The main result of this talk is that contextuality is a necessary resource for universal quantum computation with magic states on rebits. Pro: these are two-level systems; con: these are not qubits, only “rebits.”
Mermin’s square and Mermin’s star are state-independent proofs of contextuality. They look as if they will spoil the party. But in fact they help.
Previous work by Anders and Browne ’09 showed that contextuality powers measurement based quantum computation, and in 2014 Howard et al. showed that contextuality powers quantum computation with magic states.
In the setting of quantum computation by state injection, we have a restricted family of gates (e.g. Clifford gates) and we have some noisy version of a state that cannot be prepared from that gate set starting with computational basis states. Which properties must a fiducial state possess in order to “promote” a restricted model of Clifford gates to a universal quantum computer? In work by Joe Emerson’s group, this was answered for odd-prime dimensional qudits: the fiducial state must exhibit contextuality, or equivalently, negativity of its Wigner function.
Hudson’s theorem characterizes the set of pure states with non-negative Wigner functions: it is precisely the set of states that are Gaussian, a.k.a. stabilizer states in the discrete setting. Clifford operations cannot introduce (Wigner) negativity through gate operations, and so negativity is a resource that must come from the initial magic states. All of this works out very cleanly in the case of odd prime dimensions.
Mermin’s magic square implies that not all contextuality can be attributed to states, which seems to ruin our “resource” perspective. What’s worse, Mermin’s square yields a contextuality witness that classifies all 2-qubit quantum states as contextual.
To deal with this, we move to rebits, so that the density matrix is restricted to be real with respect to the computational basis at all times. We also have to use a restricted gate set that is CSS-preserving, and purely Clifford. This also affects the state preparations and measurements that are allowed, but let’s skip the details.
Now there is an analogue of Hudson’s theorem: a pure n-rebit state has a non-negative Wigner function if and only if it is a CSS stabilizer state.
Wigner non-negativity then implies that Pauli measurements on such a state are described by a non-contextual hidden variable model. Equivalently, contextuality implies negativity of the Wigner function, in our rebit/CSS setting.
There is no contradiction with the Mermin magic square because of the rebit property and the CSS restriction: we cannot measure ZX and XZ simultaneously. Only all-X and all-Z Pauli operators can be measured simultaneously.
We’ve already discussed the area law and its important implications for the efficient simulation of gapped one-dimensional spin systems. We believe that an area law holds for higher-dimensional spin systems if the Hamiltonian is gapped. We also know that one-dimensional spin chains can in general be QMA-complete, but the spectral gap shrinks in the hard cases.
These authors have a series of papers on this larger project.
On a spin chain with qudits and random interactions that are projectors of rank r, they proved bounds on the spectral gap and the ground-state entanglement as a function of r.
Then Irani and Gottesman-Hastings used the type of Hamiltonians from 1D QMA-hardness constructions to obtain Hamiltonians with 1/poly(n) gap and O(n) entropy. The local dimension is “O(1)”, but in a CS rather than physics sense (i.e., the constants are big). Some condensed matter theorists have dismissed these Hamiltonians as “fine-tuned.”
Previous work by these authors had frustration-free Hamiltonians with 1/poly(n) gap and O(log n) entanglement, but this could still be explained away as being “critical,” since this entropy scaling matches what one expects from conformal field theory.
The latest results, with entanglement entropy scaling like √n and the same guarantees on the spectral gap, do not match any of the condensed matter theorists’ explanations. They only require spins of dimension 5, so they are much closer to a natural model.
The “Motzkin state” is the ground state in question here: they use something called Motzkin paths to construct the Hamiltonian. The ground state of the original construction was a superposition of all Motzkin walks on a chain of length 2n.
We say a Motzkin state is the superposition of all Motzkin walks. A Motzkin walk is a walk that starts at 0, ends at 0, remains non-negative, and goes up, goes down, or remains level at each intervening step. Our Motzkin walks can have two kinds of up step and two kinds of down step. Our Hamiltonian introduces a penalty for violating the Motzkin rules.
The reason these states can lead to entanglement is that the amount the walk goes “up” in the left half of a cut must equal the amount it comes “down” in the right half.
Combinatorially, we know how many Motzkin walks there are with maximum height m, s kinds of up/down step, and length n. Turning the sum into an integral and doing a saddle-point integration, we get the entanglement.
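The combinatorics above can be played with directly: a small dynamic program counts the length-n walks from height 0 back to height 0 that never go negative, with s colors of up step and s colors of down step. For s = 1 this reproduces the classical Motzkin numbers 1, 1, 2, 4, 9, 21, 51, …; the faster growth of the s = 2 (colored) counts is what drives the larger entanglement across a cut.

```python
def motzkin_count(n: int, s: int = 1) -> int:
    """Count colored Motzkin walks of length n (s colors of up/down step)."""
    # walks[h] = number of length-i prefixes ending at height h
    walks = [0] * (n + 2)
    walks[0] = 1
    for _ in range(n):
        new = [0] * (n + 2)
        for h in range(n + 1):
            new[h] += walks[h]              # flat step
            if h > 0:
                new[h] += s * walks[h - 1]  # s choices of up step
            new[h] += s * walks[h + 1]      # s choices of down step
        walks = new
    return walks[0]  # walks that returned to height 0

print([motzkin_count(n) for n in range(7)])        # [1, 1, 2, 4, 9, 21, 51]
print([motzkin_count(n, s=2) for n in range(5)])   # [1, 1, 5, 13, 57]
```

For example, at n = 3 with s = 2 the thirteen walks are LLL plus the twelve colored placements of one up/down pair among a level step.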
One can implement the constraints of a colored Motzkin walk with local constraints, and these become the terms in the local Hamiltonian. You can get an upper bound on the spectral gap using the variational principle. The lower bound can be obtained using similar techniques as the previous results by the authors and others.
Is there a continuum limit for these models? Can we rigorously prove these results with an external magnetic field that can eliminate the need for boundary conditions? Are there frustrationfree Hamiltonians with unique ground states and no boundary conditions that violate the area law by large factors?
Fault-tolerant gates are great, but they can’t all be done in a transversal way while remaining universal. This is the celebrated Eastin-Knill theorem. In the context of topological quantum error-correcting codes, there is a natural relaxation of the notion of transversal gates to finite-depth quantum circuits [Bravyi-Koenig ’09].
Color codes are an interesting class of topological stabilizer codes that allow for a transversal implementation of a T gate in three dimensions; they saturate the Bravyi-Koenig bound on the Clifford hierarchy for transversality. Hector has generalized his color codes to a subsystem-code version. The recent important paper by Paetznick and Reichardt introduced the notion of gauge fixing, which lets us jump between different codes with different transversality restrictions, and this lets us sidestep the Eastin-Knill theorem. The new gauge color codes can be combined with this gauge-fixing idea to move between conventional color codes and gauge color codes. These two codes have two different sets of transversal operations that together are universal.
In typical faulttolerant methods, we make noisy syndrome measurements and we repeat them several times to avoid errors. The decoding step is a global classical calculation and the correction is a transversal quantum operation. Hector’s new paradigm of singleshot fault tolerance is a way to avoid the multiple rounds requirement in the old method.
3D color codes turn out to be single-shot fault tolerant (SSFT). This is because the gauge degrees of freedom have some redundancy, which can be used to make inferences about which stabilizer measurements might have been faulty. The notion of SSFT is closely linked to the notion of self-correction via the physical mechanism of confinement. As a simple example, consider the ferromagnetic Ising model in a symmetry-broken state below the critical temperature: anti-aligned magnetic domains are confined in this phase. The following week, Poulin gave a talk at Coogee that was skeptical about the possibility of single-shot fault tolerance. Definitely this notion needs to be made more precise.
Suppose we want to do faulttolerant error correction in a gauge color code. A faulty gauge syndrome will be one with endpoints, and we can repair the gauge syndrome, with the branching points of the result giving the new syndrome. Importantly, the gauge syndrome is unconfined; it is random except for the fixed branching points. The effective wrong part of the gauge syndrome, however, is confined. Each connected component has branch points with neutral charge. Therefore the branching points exhibit charge confinement. This sounds intriguing, but none of the Pontiffs really understand the details.
Gauge fixing to either Z or X will give you a transversal implementation of either T or HTH, and this lets you perform arbitrary gates on the encoded logical qubit.
Bonus result: 3D quantum computation with local quantum operations and constant time overhead, but global classical computation. This uses the 3D color code for computation with a stack of 2D codes for memory; see arXiv:1412.5079 for details.
We’ve previously blogged about this very interesting result here. But now there’s a problem! The thermodynamic considerations all still seem to hold, and the Hausdorff dimension of the code is still 3 or less, but the specific embedding theorem that Courtney had used previously does not apply. Therefore, it is still open whether this code can be embedded in 3 dimensions with constant density. Courtney is currently working to fix the proof, but for now the embeddability of these codes is downgraded to a conjecture.
Courtney also gave a heuristic argument for why this embeddability conjecture might be true. In the limit of low lacunarity (which is basically a measure of how much a fractal violates translation invariance) there is a simple condition that says that the density of a projection stays bounded.
Two interesting tools that Courtney uses for the thermodynamic arguments that might have broader interest are the GKS inequality, which says that adding ferromagnetic interactions cannot reduce ferromagnetic order, and MerliniGruber duality, which is a duality similar to the KramersWannier duality used to prove the phase transition in the Ising model.
This talk takes a resource-theory approach to thermodynamics. The goal is to extract work from a state, given various types of permitted free operations and a possibly limited ability to change the Hamiltonian by investing work. For each class of operations, the Gibbs state is a “free” state, analogous to product states in entanglement theory. The options include thermal operations, weak thermal contact, and Gibbs-preserving (GP) maps.
How effective these different models are at extracting work depends on the class of Hamiltonians allowed. If any Hamiltonians are allowed then there is a collapse and weak thermal contact can do as well as GP maps (and of course also thermal operations), essentially extracting all surplus free energy of a state. If we restrict the class of possible Hamiltonians then separations between these models are possible, in part because it’s harder to deal efficiently with offdiagonal terms in the density matrix.
Warning for typical QIP attendees —
This first talk may have some implications for the real world. :)
The solution of the Gaussian optimizer conjecture by Giovannetti, Holevo, and Garcia-Patron has led to enormous progress on a web of interconnected conjectures about Gaussian channels. This includes the phase space majorization conjecture, the additivity and minimum output Renyi entropy conjectures, strong-converse theorems for the classical capacity of Gaussian channels, the entropy-power inequality conjecture, and more! The only remaining open questions are the entropy-photon number inequality and the quantum capacity.
The proof uses a decomposition of Gaussian channels. Every phase-insensitive Gaussian channel is equivalent to a quantum-limited attenuator followed by a quantum-limited amplifier. Every phase-contravariant Gaussian channel is equivalent to a quantum-limited attenuator followed by a quantum-limited amplifier followed by a noiseless phase conjugation.
It also uses an old result about quantum beam splitters. How can the output of a beamsplitter be distinguished from two independent broadcasts of the same source? Answer: only if the broadcast was a coherent state. Therefore we have the “golden property” of a quantum beam splitter: the only input states producing pure output states for a quantum-limited attenuator are coherent states.
The main result of this talk is a proof of the majorization conjecture. This addresses the question: what is the minimum noise or disorder achievable at the output, optimizing over all possible input states? The output from a coherent input state majorizes all other output states. Namely, if $\Phi$ is a Gaussian channel, then $\Phi(|\alpha\rangle\langle\alpha|) \succ \Phi(\rho)$ for all input states $\rho$.
The proof uses the decomposition results mentioned above, concavity of entropy, and the breakthrough results of Giovannetti, Holevo, and Garcia-Patron. A similar result holds for any Rényi entropy of order $p \ge 1$. There is also a phase space majorization result.
Here is an awesome summary of what this research project has accomplished.
Coming the morning after the conference dinner, Andreas begins with a “hangover summary” of the main results:
On the LHS we have the asymptotic zero-error capacity of a channel, assisted either by nothing, by entanglement, or by no-signalling correlations. On the RHS we have the Lovász theta number (see below), which can be calculated efficiently using semidefinite programming.
Classical zero-error capacity depends only on the transition graph, not the probabilities. A useful notion is that of the confusability graph, which is an undirected graph on the input symbols of the channel where two symbols are confusable if there is a nonzero probability of confusing them. This encodes the combinatorics of the channel in a natural way. For example, product channels have a confusability graph whose adjacency matrix is essentially the tensor product of those of the factor channels. (More precisely, $I + A(G_1 \times G_2) = (I + A(G_1)) \otimes (I + A(G_2))$, the strong graph product.)
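To make the combinatorics concrete, here is a small Python sketch (with made-up toy channels, not ones from the talk) that builds confusability graphs from possible-output sets and checks that a product channel's confusability graph is the strong product of the factors':

```python
from itertools import product

def confusability_graph(channel):
    """channel: dict mapping input symbol -> set of possible outputs.
    Returns (inputs, set of confusable unordered input pairs)."""
    symbols = list(channel)
    edges = set()
    for i, x in enumerate(symbols):
        for y in symbols[i + 1:]:
            if channel[x] & channel[y]:          # share a possible output
                edges.add(frozenset((x, y)))
    return set(symbols), edges

def product_channel(ch1, ch2):
    """Inputs/outputs are pairs; an output is possible iff each half is."""
    return {(x1, x2): set(product(ch1[x1], ch2[x2]))
            for x1 in ch1 for x2 in ch2}

def strong_product_edges(v1, e1, v2, e2):
    """Pairs confusable iff each coordinate is equal-or-confusable."""
    edges = set()
    for (a1, a2), (b1, b2) in product(product(v1, v2), repeat=2):
        if (a1, a2) == (b1, b2):
            continue
        ok1 = a1 == b1 or frozenset((a1, b1)) in e1
        ok2 = a2 == b2 or frozenset((a2, b2)) in e2
        if ok1 and ok2:
            edges.add(frozenset(((a1, a2), (b1, b2))))
    return edges

# Toy binary channels: ch1's inputs can be confused, ch2's cannot.
ch1 = {0: {'a', 'b'}, 1: {'b', 'c'}}
ch2 = {0: {'a'}, 1: {'b'}}
v1, e1 = confusability_graph(ch1)
v2, e2 = confusability_graph(ch2)
vp, ep = confusability_graph(product_channel(ch1, ch2))
assert ep == strong_product_edges(v1, e1, v2, e2)
print(len(ep))  # confusable pairs of the product channel
```

The check passes because two input pairs share a possible output exactly when both coordinates do, which is the definition of the strong graph product.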
The largest codebook for this channel is given by the independence number of the confusability graph; this gives the optimal zero-error capacity. It’s unfortunately NP-complete to compute this property of a graph, and it is even NP-hard to approximate. However, a seminal result due to Lovász gives a convex relaxation of this into a semidefinite program, the Lovász theta number.
Another useful bound is the so-called fractional packing number, obtained by writing down the natural integer program for the independence number and replacing the constraint $v_x \in \{0,1\}$ with $0 \le v_x \le 1$, then observing that the relaxation can only increase the value:

$\alpha(G) \le \max\Big\{\textstyle\sum_x v_x : v_x \ge 0,\ \sum_{x \in N(y)} v_x \le 1 \text{ for all outputs } y\Big\} = \alpha^*(\Gamma).$

Minimizing over transition graphs $\Gamma$ consistent with the confusability graph $G$ gives $\alpha^*(G)$.
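As a sanity check on this linear program, here is a pure-Python sketch (using the pentagon "typewriter" channel, the standard example) that exhibits a feasible point of the packing LP and a feasible point of its covering dual with equal value; by weak duality this pins down $\alpha^* = 5/2$ with no solver needed:

```python
from fractions import Fraction

n = 5
# Pentagon channel: input x can produce outputs x and x+1 (mod 5).
inputs_for_output = {y: [y, (y - 1) % n] for y in range(n)}   # x with N(y|x) > 0
outputs_for_input = {x: [x, (x + 1) % n] for x in range(n)}

half = Fraction(1, 2)
v = {x: half for x in range(n)}   # primal: fractional packing
u = {y: half for y in range(n)}   # dual: fractional covering

# Primal feasibility: for every output y, sum of v over confusable inputs <= 1.
assert all(sum(v[x] for x in inputs_for_output[y]) <= 1 for y in range(n))
# Dual feasibility: for every input x, sum of u over its outputs >= 1.
assert all(sum(u[y] for y in outputs_for_input[x]) >= 1 for x in range(n))

primal_value = sum(v.values())
dual_value = sum(u.values())
# Weak LP duality sandwiches alpha* between these two values.
assert primal_value == dual_value == Fraction(5, 2)
print(primal_value)   # 5/2
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity in the feasibility checks.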
Crucially, both the Lovász theta number and the fractional packing number are multiplicative, and therefore they bound not only the independence number but also the capacity. Andreas said something surprising: if a graph parameter is multiplicative and sandwiched between the independence number and the Lovász theta number, then it must equal the Lovász theta number! This is unpublished still, but follows from the beautiful formula [personal communication from Winter, via my memory, so hopefully not too wrong]:

$\vartheta(G) = \sup_H \frac{\alpha(G \boxtimes H)}{\vartheta(H)}.$

Compare with a similar formula for the fractional packing number:

$\alpha^*(G) = \sup_H \frac{\alpha(G \boxtimes H)}{\alpha^*(H)}.$
An example, in fact the usual example, is the “typewriter graph”, the 5-cycle $C_5$. This channel has inputs $x \in \{0,\dots,4\}$ and outputs $y \in \{x, x+1 \bmod 5\}$. Then $\alpha(C_5) = 2$, the capacity is $\Theta(C_5) = \sqrt{5}$, $\vartheta(C_5) = \sqrt{5}$ (thus giving a tight upper bound on the capacity), and $\alpha^*(C_5) = 5/2$.
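These numbers are easy to poke at by brute force. A short Python script (using the $C_5$ conventions just described) verifies $\alpha(C_5) = 2$ and checks that Shannon's famous 5-word code over two channel uses is independent in the strong product, giving a rate of $\sqrt{5} > 2$ per use:

```python
from itertools import combinations

n = 5
def adjacent(i, j):                      # edges of the 5-cycle C5
    return (i - j) % n in (1, n - 1)

def independent(vertices, adj):
    return all(not adj(a, b) for a, b in combinations(vertices, 2))

# alpha(C5) = 2, by brute force over all 2^5 subsets.
alpha = max(len(s) for r in range(n + 1)
            for s in combinations(range(n), r)
            if independent(s, adjacent))
assert alpha == 2

# In the strong product C5 x C5, vertices are confusable iff BOTH
# coordinates are equal-or-adjacent.  Shannon's code {(i, 2i mod 5)}:
def adj2(a, b):
    return all(x == y or adjacent(x, y) for x, y in zip(a, b))

code = [(i, (2 * i) % n) for i in range(n)]
assert independent(code, adj2)
print(len(code))   # 5 codewords over 2 uses: rate sqrt(5) per use
```

The code works because adjacent first coordinates force second coordinates to differ by 2 or 3 mod 5, which are not edges of the cycle.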
We would like to close the gap provided by the above relaxations by allowing additional resources in the encoding and decoding. Shannon did this in his 1956 paper by allowing feedback, and we can also add entanglement and no-signalling correlations.
The Lovász theta number is in fact an upper bound to the entanglement-assisted zero-error capacity, which sometimes is larger than the unassisted zero-error capacity. This gives an operational (but partial) explanation of why the Lovász theta function is not always a tight upper bound.
On the other hand, the fractional packing number is equal to the no-signalling-assisted capacity, which is an awesome and nontrivial result.
Now what about the quantum case? Here things get even cooler, although not everything is as well understood. For quantum channels, the quantum generalizations of the transition and confusability graphs are given by the following correspondences:
transition graph $\leftrightarrow$ the Kraus subspace $K = \mathrm{span}\{K_i\}$, and confusability graph $\leftrightarrow$ the non-commutative graph $S = \mathrm{span}\{K_i^\dagger K_j\}$.
The authors have also generalized (as in Will Matthews’ talk earlier) the notion of no-signalling operations to ones that take quantum inputs and outputs. Just as the classical no-signalling polytope is a small linear program, this quantum set has a nice SDP characterization.
Now consider the zero-error capacity assisted by these quantum no-signalling correlations; it too is given by a semidefinite program.
Then the “most amazing thing”, according to Andreas, is the following theorem: the zero-error capacity of a classical channel assisted by quantum no-signalling correlations is exactly the logarithm of the Lovász theta number of its confusability graph. Note that this is not just an inequality, but an actual equality! They in fact prove a one-shot version of this statement.
The proof is reminiscent of the recent and inexplicably not-invited Fawzi-Renner breakthrough on the quantum conditional mutual information.
So for quantum no-signalling assistance, $\vartheta$ plays the role that $\alpha^*$ plays classically, which is the first information-theoretic application of the Lovász theta function.
The talk closed with some remarks and open questions.
This talk is about producing highquality (i.e. nearly uniform) randomness from [necessarily longer] strings that contain “lowerquality” randomness. Tools for doing this are called extractors; other closely related tools are condensers and expanders. Why would we need to do this? Well suppose we start with some uniform randomness and then leak some information about it to an adversary (Eve). Then Eve’s subjective view of our randomness is no longer uniform. Applying an extractor can make it again close to uniform. Even in a noncryptographic setting they can be useful. Suppose we run a randomized algorithm using some large uniformly random seed. Conditioned on the output of this algorithm, the seed will no longer be uniformly random. If we want to continue using it, maybe even for later steps of a larger algorithm, we will need to clean it up using something like an extractor.
This establishes the need for extractors and their crucial property: their goal is to make a string random conditioned on some other string (e.g. Eve’s knowledge, or the earlier outputs of the randomized algorithm; call this the “side information”). This talk will consider the case when side information is in fact quantum. Conditioning on quantum things is always tricky, and one hint that the math here will be nontrivial is the fact that “operator space theory” is in the title.
One unpleasant fact about extractors is that they cannot be deterministic (this is a useful exercise for the reader). So they usually use some extra input, guaranteed to be uniform and independent of the source. The smallest possible seed size is logarithmic in the input size. Also the number of output bits can be (more or less) no larger than the minentropy of the source. So the key parameters are the input size, the input minentropy, the input seed size, the number of output bits and their trace distance from being uniform and independent of the side information.
One example of a nice classical result is the leftover hash lemma.
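As a toy illustration of the leftover hash lemma in action, here is a minimal sketch of extraction via the Toeplitz-style 2-universal hash family; the indexing convention and all parameters below are my own, not from the talk:

```python
import random

def toeplitz_extract(x_bits, seed_bits, m):
    """Extract m nearly uniform bits from the n-bit source x by multiplying
    with a seed-defined matrix over GF(2).  The family of such matrices is
    2-universal, so the leftover hash lemma bounds the output's distance
    from uniform in terms of the source min-entropy.  Seed: n + m - 1 bits.
    (The windowed indexing below is one hypothetical convention.)"""
    n = len(x_bits)
    assert len(seed_bits) == n + m - 1
    out = []
    for i in range(m):
        row = [seed_bits[i + j] for j in range(n)]   # row i is a seed window
        out.append(sum(r & x for r, x in zip(row, x_bits)) % 2)
    return out

rng = random.Random(0)
n, m = 16, 4
seed = [rng.randrange(2) for _ in range(n + m - 1)]
x = [rng.randrange(2) for _ in range(n)]
print(toeplitz_extract(x, seed, m))  # m extracted bits
```

Note that for a fixed seed the map is GF(2)-linear in the source, which is what makes the universality analysis tractable.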
Can we still get the same result if the adversary is quantum? This will have implications for privacy amplification in quantum cryptography and also (I'm not sure why) for properties of quantum memory.
Here the min-entropy becomes the conditional min-entropy, which involves a maximization over all guessing strategies and measures the knowledge of an adversary with access to a quantum system correlated with the source.
The main result here is a mathematical framework to study this, based on operator space theory. Why operator space? We can define C(Ext, k) to be the maximum advantage for a classical adversary against the extractor Ext on sources with k bits of min-entropy. The key insight is that this is a norm, specifically an operator norm. Sources with k bits of min-entropy correspond to probability distributions (i.e. $\|x\|_1 = 1$) with all entries at most $2^{-k}$ (i.e. $\|x\|_\infty \le 2^{-k}$). The intersection of these two constraints defines a centrally symmetric convex set (if we subtract off the uniform distribution, I suppose), which can then define a norm. One other example of such a hybrid norm is the family of Sobolev norms. Now the achievable bias is like the maximum 1-norm of the output when we act on such a vector with the extractor Ext, which we can think of as a linear map. So this is an operator norm: the norm of Ext is the max of the norm of Ext(x) divided by the norm of x, where we measure the numerator in the 1-norm and the denominator in this funny hybrid norm.
The advantage of a quantum adversary is Q(Ext, k), which is the cb (completely bounded) version of the above: basically, suppose that instead of the entries of x being numbers, they are matrices. This is kind of like what team Madrid did with nonlocal games.
They also define an SDP relaxation which has the advantage of being efficiently computable.
It follows trivially from the definitions that

$C(\mathrm{Ext}, k) \le Q(\mathrm{Ext}, k) \le \mathrm{SDP}(\mathrm{Ext}, k).$
The nontrivial stuff comes from the fact that the SDP relaxations are in some cases analytically tractable, e.g. showing that small-output and high-input-entropy extractors are quantum-proof (i.e. secure against quantum adversaries).
A nice open question: Given that there is an SDP, can we define a convergent hierarchy?
The basic idea of a unitary $t$-design is that it is a discrete set of unitary operators that exactly reproduces the first $t$ moments of the Haar measure, i.e. the uniform measure on the unitary group. This construction lets us sample efficiently and still reproduce important properties of the Haar measure. But we don’t just want constructions with small cardinality, we also want them to have efficient gate complexity.
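As a tiny numerical illustration (my own, not from the talk): the single-qubit Clifford group is an exact unitary 2-design, which one can verify by generating its 24 elements (up to phase) from H and S and checking that the second frame potential $\frac{1}{|G|}\sum_U |\mathrm{tr}\,U|^4$ equals the Haar value of 2:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def canonical(U):
    """Fix the global phase so unitaries equal up to phase collide as keys."""
    idx = np.argmax(np.abs(U) > 1e-9)       # first nonzero entry
    phase = U.flat[idx] / abs(U.flat[idx])
    return np.round(U / phase, 9) + (0.0 + 0.0j)   # also kill -0.0

# Generate <H, S> up to phase by closure under left multiplication.
group = {canonical(np.eye(2, dtype=complex)).tobytes(): np.eye(2, dtype=complex)}
frontier = list(group.values())
while frontier:
    new = []
    for U in frontier:
        for g in (H, S):
            V = g @ U
            key = canonical(V).tobytes()
            if key not in group:
                group[key] = V
                new.append(V)
    frontier = new
assert len(group) == 24

# |tr U|^4 is invariant under the global phase we fixed, so this is safe.
potential = sum(abs(np.trace(U)) ** 4 for U in group.values()) / len(group)
print(potential)  # Haar value for a 2-design is exactly 2
```

The Clifford group is in fact a 3-design on qubits, so the test passes with room to spare; a generic finite gate set would give a strictly larger frame potential.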
The speaker jokingly says, “unfortunately, these have a lot of applications.” This includes randomized benchmarking, decoupling and errortolerant QKD schemes. There is another joke about how decoupling doesn’t mean breaking up (romantic) couples. But if Alice and Bob are entangled, and one passes through a decoupling map, then they end up nearly separable! That sounds like a breakup to me. Note that a weaker randomizing map would merely leave them with a data hiding state, which I suppose corresponds to the facebook status “It’s Complicated.” These jokes are still in focus groups, so please forgive them.
We would like to have unitary designs that have low gate complexity. The main result of this talk is a Clifford-only unitary 2-design that uses a near-linear number of gates, but it assumes an extension of the Riemann hypothesis. Without this hypothesis, they also have a non-Clifford construction with the same complexity, and a Clifford-only scheme with slightly higher complexity. Sampling uniformly from the whole Clifford group has essentially quadratic gate complexity, so this is an improvement.
One can construct these 2-designs by writing the gates in a structured form in which the underlying matrix has a low-complexity decomposition; this decomposition is the main technical contribution of the authors. By using the result that Pauli mixing by conjugation implies unitary 2-design, one only needs to consider certain forms of this matrix. Ronald de Wolf suggested that these results could be improved a bit further by using the fast integer multiplication algorithms due to Fürer.
Carl opens his talk by saying that this result has been a goal of the U of M group for the past four years. It then proceeds in suitably epic fashion by looking up the definition of randomness from The Urban Dictionary (highlight: “A word often misused by morons who don’t know very many other words.”) and moving onto modern cryptographic standards (skipping, though, this gem).
This talk follows the research program laid out by Colbeck in 2006, where he suggested that one might take the output of some CHSH games, verify that the winning probability is higher than the optimal classical value, and then apply a deterministic extractor. Difficulties here involve quantum side information, information locking, and the other nuisances that composable security was invented to address. Vazirani and Vidick showed in 2011 that this was possible if, in the honest case, the Bell inequality is violated optimally; at last year's QIP, Miller and Shi showed it worked with some not-quite-perfect value of the CHSH game; and the current result extends this to any beyond-classical value of the game.
One of the key technical ingredients is a new uncertainty principle.
We will not fully do it justice here. Roughly, it goes as follows: define two quantities in terms of the post-measurement states resulting from a pair of anticommuting measurements on the state. The Miller-Shi uncertainty principle then states that this pair of quantities must fit into a particular region of the plane. (See eq. (2.2) of their paper for a bit more detail.)
The proof uses the “uniform convexity of the Schatten $p$-norm” due to Ball, Carlen, and Lieb (see also this note). Carl suggests there are more applications. Fernando Brandao and I [Aram] have one! Stay tuned.
At one point Carl says: “By an inductive argument (which actually takes 20 pages), we see that…”
The result can then be generalized beyond XZ measurements to any pair of noncommuting measurements.
Now what about totally untrusted devices? These too can be forced, using a VV-like approach, to act as though they are performing non-commuting measurements.
This can be generalized even further to Kochen-Specker inequalities and contextuality games, modulo some mild assumptions on how the device works.
Open problems: what resources are used? This protocol uses a linear amount of entanglement, for example. For any experimentalists watching (for whom entanglement is scarce and randomness abundant), this part of the talk must sound funny.
Solovay-Kitaev was all about geometry, but recently the field of quantum compiling has shifted to algebraic number theory, following the path-breaking 2012 paper by Kliuchnikov, Maslov, and Mosca.
Some number theory: in the ring $\mathbb{Z}[\sqrt{2}]$, define the Galois conjugate $(a + b\sqrt{2})^\bullet = a - b\sqrt{2}$ and the norm $N(x) = x\, x^\bullet$.
Applying the bullet operation (a Galois automorphism of the number field) to a compact interval yields a discrete unbounded set. The 1d grid problem is: given finite intervals $A, B \subseteq \mathbb{R}$, find an element $x \in \mathbb{Z}[\sqrt{2}]$ with $x \in A$ and $x^\bullet \in B$. Generically there will be solutions, and these are easy to find when both $A$ and $B$ are large, corresponding to a fat rectangle (when $\mathbb{Z}[\sqrt{2}]$ is viewed as a 2d lattice via $x \mapsto (x, x^\bullet)$). When we have a long thin rectangle we can convert it into this case by rescaling.
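For intuition, here is a brute-force Python sketch of the 1d grid problem (interval endpoints invented for the example). The identities $x + x^\bullet = 2a$ and $x - x^\bullet = 2b\sqrt{2}$ bound the integer search box for $x = a + b\sqrt{2}$:

```python
import math

SQRT2 = math.sqrt(2)

def grid_points(A, B):
    """All x = a + b*sqrt(2) in Z[sqrt(2)] with x in interval A and the
    Galois conjugate x_bullet = a - b*sqrt(2) in interval B."""
    (a0, a1), (b0, b1) = A, B
    sols = []
    # a = (x + x_bullet)/2 and b = (x - x_bullet)/(2*sqrt(2)) give the box.
    for a in range(math.ceil((a0 + b0) / 2), math.floor((a1 + b1) / 2) + 1):
        b_lo = math.ceil((a0 - b1) / (2 * SQRT2))
        b_hi = math.floor((a1 - b0) / (2 * SQRT2))
        for b in range(b_lo, b_hi + 1):
            x, xb = a + b * SQRT2, a - b * SQRT2
            if a0 <= x <= a1 and b0 <= xb <= b1:
                sols.append((a, b))
    return sols

# A fat rectangle: both intervals are long, so solutions are plentiful.
sols = grid_points((0, 4), (-4, 0))
print(sols)
assert (1, 1) in sols   # x = 1 + sqrt(2) ~ 2.41 in A; conjugate ~ -0.41 in B
```

Enumeration is fine for fat rectangles; the point of the paper's machinery is handling the long, thin ones efficiently via rescaling and shearing.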
The 2d grid problem: now we consider the ring $\mathbb{Z}[\omega]$ for $\omega = e^{i\pi/4}$. This turns into a more complicated set intersection problem on a 2d lattice whose details we omit. But just like the rescaling used for 1d, we now use a more complicated set of shearing operators to transform the problem into one where the solutions are easy to find. But we need an extra property: the “uprightness” of the rectangles.
Main Theorem: let $A$ and $B$ be two convex sets with non-empty interior. There is a grid operator $G$ such that both $G(A)$ and $G^\bullet(B)$ are 1/6-upright. Moreover, it can be efficiently computed.
Theorem [Matsumoto-Amano ’08]
Every Clifford+T single-qubit operator W can be uniquely written in the form
W = (T | ε) (HT | SHT)* C,
where C is a Clifford.
Exact synthesis of Clifford+T operators was solved by Kliuchnikov, Maslov, and Mosca: a 2-by-2 unitary operator is exactly representable over Clifford+T iff its matrix elements all lie in the ring $\mathbb{Z}[1/\sqrt{2}, i]$. Moreover, the least denominator exponent of the entries determines the T-count of the resulting operator.
The upshot of this is that if you have a factoring oracle, then the algorithm gives circuits of optimal T-count. In the absence of such an oracle, it returns a nearly optimal T-count, namely the second-to-optimal T-count plus a lower-order term.
Are there any interesting sets of beamsplitter-type quantum gates that don’t generate either only permutation matrices or arbitrary unitaries? The main result is that if one has a two-level unitary of determinant 1 with all entries nonzero, then it densely generates SU(m) or SO(m) for $m \ge 3$. In other words, any beamsplitter that nontrivially mixes modes is universal for three or more modes. Another way to say this is that for any beamsplitter $b$, either $b$ is efficiently classically simulable or else it is universal for quantum optics. This is a nice dichotomy theorem. The proof requires the classification of the finite subgroups of SU(3), which was, surprisingly, completed only in 2013. (It was written down in 1917 as “an exercise for the reader”, but educational standards have apparently come down since then.)
2014 has seen some of the most exciting news for quantum conditional mutual information since strong subadditivity.
The goal of this talk is to find a large class of interesting states for which quantum state tomography and quantum state verification can be done efficiently. We would also like that, if a state is in this class, we can efficiently verify that fact.
At one point in the intro, Isaac says that a point is “…for the experts, err, for the physicists.” Then he remembers who his audience is. Later he says “for this audience, I don’t have to apologize for the term ‘quantum conditional mutual information'”.
This talk proposes a class of states that has this verification property, with complexity polynomial in $n$, the number of particles. Any state in the class is defined by a set of $O(1)$-particle local reduced density operators.
Isaac first reviews the approach to matrix product state tomography, which uses the parent Hamiltonian of a matrix product state to show that any (injective) MPS is completely determined by a local set of observables. This result is a generalization of the MPS tomography result to higher dimensions, but with the important difference that there is no need for a global wavefunction at all!
The main result, roughly speaking, is that there exists an efficiently checkable certificate bounding the distance between the true state and the state reconstructed from the local reduced density operators.
Local density matrices can (approximately) reconstruct the global state. Previously this was done assuming that the global state is a low-bond-dimension MPS. But that’s in a sense circular, because we have to make a (strong) assumption about the global state. This work allows us to do this purely using local observables. Awesome.
The talk proceeds by giving a reconstruction procedure by which a global state can be reconstructed from the marginals of a state $\rho$, and by showing conditions under which the reconstruction is close to $\rho$. The global state is completely determined by the local reduced density operators if the conditional quantum mutual information is zero.
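A quick numerical illustration of that last sentence (my own toy example, not from the talk): for a product state the QCMI vanishes and the marginals determine everything, while for the GHZ state the QCMI is 1 bit, matching the fact that GHZ and the corresponding mixture have identical two-body marginals:

```python
import numpy as np

def ptrace(rho, keep, n):
    """Partial trace of an n-qubit density matrix, keeping qubits in `keep`."""
    drop = sorted((q for q in range(n) if q not in keep), reverse=True)
    t = rho.reshape([2] * (2 * n))   # axes: row qubits, then column qubits
    m = n
    for q in drop:
        t = np.trace(t, axis1=q, axis2=q + m)
        m -= 1
    return t.reshape(2 ** m, 2 ** m)

def S(rho):
    """von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def qcmi(rho):   # I(A:C|B) for 3 qubits with A=0, B=1, C=2
    return (S(ptrace(rho, [0, 1], 3)) + S(ptrace(rho, [1, 2], 3))
            - S(ptrace(rho, [1], 3)) - S(rho))

ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho_ghz = np.outer(ghz, ghz.conj())
plus = np.full(8, 1 / np.sqrt(8), dtype=complex)   # |+++>, a product state
rho_prod = np.outer(plus, plus.conj())

print(qcmi(rho_ghz))    # 1 bit: marginals do NOT determine the global state
print(qcmi(rho_prod))   # 0: a quantum Markov chain, locally reconstructible
```

States with zero QCMI are exactly the quantum Markov chains, which is why they are the ones a local reconstruction procedure can hope to recover.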
Theorem 1: if the conditional mutual information $I(A{:}C|B)$ is small, then the state reconstructed from the marginals on $AB$ and $BC$ is close to the true state on $ABC$. This is nice, but the upper bound depends on global properties of the state. (Here we think of AB and BC as “local” and ABC as “global.” Don’t worry, this will scale well to $n$ qubits.)
To proceed, we would like to upper bound the QCMI (quantum conditional mutual information) in terms of locally observable quantities. This is a straightforward but clever consequence of strong subadditivity, applied with a judiciously chosen auxiliary system.
For a judicious choice of regions, and assuming a strong form of the area law conjecture in 2d, this bound is effective.
We can then build up our reconstruction qubit-by-qubit until we have covered the entire system.
Applications:
At ultra-high energies, we need a theory of quantum gravity to accurately describe physics. As we move down in energy scales, vanilla quantum field theory and eventually condensed-matter physics become relevant. Quantum information theory comes into play in all three of these arenas!
One of the principal claims of this talk is that quantum information is going to be central to understanding holography, which is the key concept underlying, e.g., the AdS/CFT correspondence. Quantum gravity theories in negatively curved spacetimes (saddlelike geometries) seem to have a duality: one Hilbert space has two theories. The duality dictionary helps us compute quantities of interest in one theory and map the solutions back to the other theory. It turns out that these dictionaries often involve computing entropic quantities.
A funny feature of AdS space is that light can reach infinity and bounce back in finite time! This acts just like boundary conditions (at infinity) and simplifies certain things. Inside the bulk of AdS space, geodesics are (in the Poincare disk model) just circular arcs that meet the boundary at right angles. When there is matter in the bulk, these get deformed. We’d love to be able to interpret these geometric quantities in the bulk back in the CFT on the boundary.
The Ryu-Takayanagi correspondence links the length of a geodesic in the bulk with the entanglement entropy in the CFT on the boundary. This can be related (using e.g. work by Calabrese & Cardy) to the central charge of the CFT. A really neat application of this correspondence is a very simple and completely geometric proof of strong subadditivity due to Headrick and Takayanagi (with the important caveat that it applies only to a restricted class of states, namely those states of CFTs that have gravity duals).
The new result here is to generalize the Ryu-Takayanagi correspondence to general curves, not just geodesics. The main result: there is a correspondence between the length of convex curves in the bulk and the optimal entanglement cost of streaming teleportation communication tasks that Alice and Bob perform on the boundary.
We promise we’ll finish posting these soon! Day 3 was only a half day of talks with a free afternoon, and the rainy weather of the first two days finally subsided just in time.
LDPC codes are families of sparse codes, meaning that the stabilizers are all low weight, and we usually also require that each qubit partakes in at most a constant number of stabilizers. That is, the parity check matrix is both row and column sparse with constant sparsity per row and column.
The classical versions of these codes are ubiquitous; your cell phone uses LDPC codes, for example. One of the principal advantages of (classical) LDPC codes is that they have good minimum distance, constant rate, and fast, nearly optimal decoders. A new variant called spatially coupled LDPC codes are a universal way (i.e., independent of the channel) to get capacity for most memoryless channels of interest with a low-complexity decoder.
Unfortunately, the quantum case is largely open still, and the speaker admitted in the first moment that he wasn’t going to solve everything during his talk. Leaving aside the question of distance of quantum LDPC codes, one of the reasons why decoding a quantum code is hard is that any error $E$ has the same syndrome as $ES$ whenever $S$ is in the stabilizer group. This added degeneracy creates an extra freedom that is difficult to account for in a decoder: you want to find the most likely coset, rather than the most likely error.
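A toy illustration of the syndrome/coset issue (the 3-qubit repetition code, my own example, not from the talk): two errors can share a syndrome, and whether confusing them matters depends on whether they differ by a stabilizer or by a logical operator:

```python
import numpy as np

# 3-qubit bit-flip (repetition) code: stabilizer checks Z1Z2 and Z2Z3.
# Rows of H are the checks, columns the qubits; X errors are GF(2) vectors.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def syndrome(err):
    return tuple(H @ err % 2)

e1 = np.array([1, 0, 0])          # X on qubit 1
e2 = np.array([0, 1, 1])          # X on qubits 2 and 3

# Different errors, same syndrome...
assert syndrome(e1) == syndrome(e2) == (1, 0)
# ...but e1 + e2 = X1 X2 X3, which here is the LOGICAL X: these two errors
# sit in different cosets, so a decoder that picks the wrong one fails.
# By contrast, confusing errors that differ by a stabilizer (e.g. Z1 vs Z2,
# which differ by the stabilizer Z1Z2) is harmless -- that is degeneracy.
print(syndrome(e1), syndrome(e2))
```

This is why the quantum maximum-likelihood problem is over cosets of the stabilizer group rather than over individual errors.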
In a classical LDPC code, the Tanner graph (the bipartite graph whose nodes are bits and checks with edges whenever a collection of bits partake in the same check) is nearly treelike (it’s an expander) and so message passing algorithms perform nearly at capacity. In the quantum case, the Tanner graph contains many 4-cycles, and message passing decoders get stuck in loops and don’t converge.
Bravyi, Suchara, and Vargo have recently had a breakthrough where they were able to decode a very special quantum LDPC code near the hashing bound: the toric code. But didn’t we already know how to decode the toric code? Yes, of course, there are many good decoders for this code. However, none of them achieve a performance that decodes X and Z errors simultaneously with performance approximately equal to the maximum a posteriori (MAP) decoder.
There is also a new idea, pioneered for polar codes by Renes, Dupuis, and Renner and expanded by Delfosse and Tillich, to use asymmetric codes and decoders. If you are going to decode X first and then Z, why not use a stronger code for X errors, decode them first, and then use that advantage to help you decode the Z errors after that? Using this idea, you can provably achieve the hashing bound asymptotically.
He concludes with some bright spots: recent global analysis of local decoders (such as the recent work by Hastings on hyperbolic codes), and tricks like entanglementassisted codes or spatially coupled quantum LDPC codes might lead to capacity achieving decoders soon.
A great intro: “My paper is at 1408.1720 + PRA… I forgot the full journal reference. You can look it up, which is what people usually do.” Or they just get it from the arxiv!
Fault-tolerant logical gates are central objects in quantum computation. We need them to suppress the spread of errors in computations. Unfortunately, there is a somewhat stringent no-go theorem due to Eastin and Knill that says you cannot have a universal set of transversal gates. (There are various ways around this theorem, such as recent work by Paetznick and Reichardt.)
Recently, Bravyi and Koenig have generalized this work. They extend the notion of a transversal gate to constant-depth circuits and show that constant-depth circuits restrict the allowable gates performable in topological codes to certain levels of the Gottesman-Chuang hierarchy. The main idea of the proof is to use a cleaning lemma: if you can split your code into $m+1$ correctable regions, then you can only implement gates in level $m$ of the Clifford hierarchy. In particular, $D$-dimensional topological codes can be split into $D+1$ correctable regions, so we can only implement gates at level $D$ in the hierarchy.
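As a concrete reminder of what the hierarchy levels mean (standard single-qubit facts, not from the talk): $U$ is in level $l$ iff $U P U^\dagger$ lies in level $l-1$ for every Pauli $P$, with level 1 being the Paulis themselves. A quick numpy check places the T gate at level 3 but not level 2:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
T = np.diag([1, np.exp(1j * np.pi / 4)])

def is_pauli_up_to_phase(M):
    """Check M is proportional (unit modulus) to I, X, Y or Z."""
    for P in (np.eye(2), X, 1j * X @ Z, Z):     # 1j*X@Z is the Pauli Y
        c = np.trace(P.conj().T @ M) / 2
        if abs(abs(c) - 1) < 1e-9 and np.allclose(M, c * P):
            return True
    return False

def conj(U, P):
    return U @ P @ U.conj().T

# Level 2 (Clifford) means U P U^dag is a Pauli for every Pauli P.
# T fails: T X T^dag is (X + Y)/sqrt(2), not a Pauli...
assert not is_pauli_up_to_phase(conj(T, X))
# ...but T X T^dag is itself Clifford (it conjugates Paulis to Paulis),
# so T sits at level 3 of the Gottesman-Chuang hierarchy.
TXT = conj(T, X)
assert all(is_pauli_up_to_phase(conj(TXT, P)) for P in (X, Z))
print("T is in level 3, not level 2")
```

The same recursive check generalizes to multi-qubit gates, which is how statements like "only level-$D$ gates are achievable" get cash value.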
What Yoshida and Pastawski have done is show that a loss tolerance threshold for the code constrains which level of the hierarchy its transversal gates can reach, and that conversely a logic gate at a given level bounds the achievable loss threshold.
Another result is that if a stabilizer Hamiltonian in 3 dimensions has a fault-tolerantly implementable non-Clifford gate, then the energy barrier is constant. The proof is simple: we can split the 3d chunk of code into correctable regions that have string-like shapes, so there must exist a string-like logical operator for these codes, hence the energy barrier is constant. Beautifully simple!
If you have a topological stabilizer code in $D$ dimensions with an $m$-th level logical gate, then the code distance is at most $O(L^{D+1-m})$. This improves a bound of Bravyi-Terhal that didn’t take advantage of the level $m$.
Lastly, we can consider subsystem codes. In $D$ dimensions, fault-tolerant gates are in level $D$ of the hierarchy (as long as the code distance grows at least logarithmically in the system size).
For the remaining session, it turns out that no Pontiffs were in the audience. However, we had mostly seen these results in other venues or read them before. They are great results, so go read the papers! We hope the trackback to the arxiv is a small compensation to the authors for our failure to blog these interesting talks.
From the team that brought you “QIP 2015 Day 1 liveblogging”, here is the exciting sequel. Will they build a quantum computer? Will any complexity classes collapse? Will any results depend on the validity of the Extended Riemann Hypothesis? Read on and find out!
Praise from a reader of “day 1”:
QIP 2015 liveblogging — it’s almost like being there. Maybe better.
Rather than “bore us” (his words) with experimental details, Dave gave a broad-brush picture of some of the progress that his lab has made over the years at improving the coherence of quantum systems.
Dave gave a history of NIST looking for more accurate clocks. Recently, a trapped near-UV transition of Hg ions at last did better than the continually improving microwave Cs standard.
At a 1994 conference at NIST, they invited Artur Ekert to speak about quantum gates. Cirac and Zoller gave the first detailed proposal for quantum computing with a linear ion trap at about this time. They were quickly able to demonstrate one of these gates in a linear ion trap.
He showed and discussed a picture of the racetrack planar ion-trap array, where ions are moved into position to perform gates throughout the trap. They can move and manipulate the ions using a scheme due to Milburn, Schneider, James, Sorensen, and Molmer that uses position-dependent dipole forces. The transverse Ising model can be simulated by applying a moving standing wave to ions in a linear trap; this is a test case for useful simulations.
Other groups at NIST have also done impressive work on quantum simulation. Bollinger’s group has made a self-assembled triangular lattice with Ising-type couplings that we talked about previously here on the Pontiff.
Everyone in the ion trap business is plagued by something called “anomalous heating”, of unknown origin, which gets worse as the length scale gets smaller. Colleagues studying surface science have suggested using an argon ion cannon (damn, that sounds impressive) to blast away impurities in the surface trap electrodes, scrubbing the surface clean. This has reduced anomalous heating 100-fold, but it’s still above all known electronic causes. Using cryogenic cooling helps too, as has been done by Ike Chuang’s group at MIT.
Laser intensity fluctuations at the site of the ions are another continual source of error. Optical and IR beams can be efficiently transmitted and positioned by optical fibers, but UV beams create color centers and degrade optical fiber on a timescale of about an hour. Recent work by the group has shown that this degradation timescale can be extended somewhat.
Dave showed a list, and there are about 30+ groups around the world working on iontrap quantum information processing. Pretty impressive!
Dave showed this Time magazine cover that calls D-Wave the “Infinity Machine” that no one understands. In contrast, he says, we know how quantum computing works… and how it doesn’t. Sober experimentalists seem to be in rough agreement that
Randomized benchmarking is a standard method whereby experimental implementations of quantum gates can be assessed for their averagecase accuracy in a way that doesn’t conflate the noise on the gates with the noise of state preparation and measurement (SPAM) errors.
The protocol is simple: apply a random sequence of $m$ Clifford gates, followed by the single Clifford gate that inverts their combined action, and then measure to see whether the system has returned to its initial state.
Repeat this for many random sequences and many repetitions of each sequence to get statistics. Under a certain noise model called the “0th order model”, the averages of this procedure for different sequence lengths $m$ will fit a model of the form $A + B f^m$, where $f$ is a quantity closely related to the average quality of the gates in the sequence. Define $r$ to be the average error rate. (Morally, this is equivalent to $1-f$ in the above model, but the actual formula is more complicated.) To understand the convergence of this protocol to an estimate, we need to understand the variance as a function of $m$.
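Here is a minimal numerical sketch of the 0th-order model (all the gate parameters below are invented): given noiseless survival averages, $f$ can be read off with a log-linear fit, and the single-qubit error rate follows from the standard relation $r = (1-f)(d-1)/d$:

```python
import numpy as np

# Toy 0th-order randomized-benchmarking model: survival p(m) = A + B f^m.
A_true, B_true, f_true = 0.5, 0.5, 0.98   # made-up values
m_vals = np.arange(1, 50)
p = A_true + B_true * f_true ** m_vals    # exact averages, no shot noise

# With A known (1/2 for a single qubit), the decay is linear in log space,
# so f is recovered by least squares on log(p - A) versus m.
slope, intercept = np.polyfit(m_vals, np.log(p - A_true), 1)
f_est = np.exp(slope)
print(f_est)          # recovers 0.98

# Average error rate for a single qubit (d = 2): r = (1 - f)(d - 1)/d.
r = (1 - f_est) * (2 - 1) / 2
print(r)              # 0.01 here
```

In a real experiment the averages carry shot noise, which is exactly why the variance of the survival estimate as a function of $m$, the subject of this talk, matters for choosing sequence lengths.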
The main contribution is to drastically reduce the trivial variance bound. This provides a good guide on how to choose optimal sequence lengths for experiments, and the bounds are nearly exact in the case of a single qubit. In the parameter range of interest, this improved over previous estimates of the sample complexity by three orders of magnitude.
The full version is not yet on the arxiv, but due to an author mistake, the above link gives the long version of the QIP submission. Download it there while you still can!
Quantum many-body systems are pretty wild objects, with states living in exponentially large dimensions or even worse. But we often have a mental model of them as basically like non-interacting spins. In some cases, the renormalization group and other arguments can partially justify this. One thing that’s true in the case of non-interacting spins is that the density of states is approximately Gaussian. The idea here is to show that this still holds when we replace “non-interacting spins” with something morally similar, such as exponentially decaying correlations, bounded-range interactions, etc.
This way of writing it makes it sound trivial. But major open questions like the area law fit into this framework, and proving most statements is difficult. So technical advances in validating our “finite correlation length looks like noninteracting spins” intuition can be valuable.
Today’s technical advance is a quantum version of the Berry-Esseen theorem. The usual Berry-Esseen theorem gives quantitative bounds on the convergence to the mean that we get from the central limit theorem. Here we consider a lattice version, where we consider spins on a d-dimensional lattice and local observables A and B that act on subsets of spins separated by a distance L. We require a finite correlation length, as we get, for example, for all Gibbs states above some critical temperature (or at any nonzero temperature in D=1).
What does a (quantitative) CLT give us beyond mere large deviation bounds? It shows that the density of states (at least those inhabited by the particular state under consideration) is roughly Gaussian, thereby roughly matching what we would get from a tensor power state. This is somewhat stronger than the "typical subspace"-type guarantees that we would get from large deviation bounds.
The main application here is an equivalence theorem between the canonical and microcanonical ensembles: i.e. between the Gibbs state and a uniform mixture over an energy band of suitable width. These states are far apart in trace distance, but this paper shows that they look similar with respect to sufficiently local observables. If you think this sounds easy, well, then try to prove it yourself, and then once you give up, read this paper.
How efficiently can we prepare thermal states on a quantum computer? There is a related question: how does nature prepare states? That is, what is the natural rate for thermalization given a quantum lattice system? There are two possible ways to model thermalization, both of which are computationally efficient. "Davies generators" mean local jumps that can be modeled as local interactions with a Markovian bath at a fixed temperature, while "heat-bath generators" mean that we repeatedly apply the Petz recovery map to small blocks of spins. Call both "Gibbs samplers."
Consider the setting where you have a system living on a lattice with a bit or qubit on each site, and some memoryless, spatially local, dynamics. Classically the powerful tools of DLR (Dobrushin-Lanford-Ruelle) theory imply a close relation between properties of the dynamics and properties of the stationary state. Specifically, spatial mixing (meaning decaying correlations in the stationary state) can be related to temporal mixing (meaning that the dynamics converge rapidly to the stationary state). (The best reference I know is Martinelli, but for a more CS-friendly version, see also this paper.)
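For intuition, here is a tiny classical example (my own construction, not from the paper): heat-bath Glauber dynamics on a 3-spin Ising chain. We build the full transition matrix, check that the Gibbs distribution is stationary, and read off the spectral gap that controls temporal mixing:

```python
import itertools
import numpy as np

# Heat-bath (Glauber) dynamics for a 3-spin open Ising chain at inverse
# temperature beta: pick a site uniformly, then resample it from its
# conditional Gibbs distribution.
beta, n = 0.5, 3
states = list(itertools.product([-1, 1], repeat=n))

def energy(s):
    return -sum(s[i] * s[i + 1] for i in range(n - 1))

# Gibbs distribution (the stationary state of the dynamics)
pi = np.array([np.exp(-beta * energy(s)) for s in states])
pi /= pi.sum()

P = np.zeros((2**n, 2**n))
for a, s in enumerate(states):
    for i in range(n):
        t = list(s); t[i] = -t[i]
        b = states.index(tuple(t))
        eS, eT = energy(s), energy(tuple(t))
        p_flip = np.exp(-beta * eT) / (np.exp(-beta * eS) + np.exp(-beta * eT))
        P[a, b] += p_flip / n          # flip site i
        P[a, a] += (1 - p_flip) / n    # keep site i

evals = np.sort(np.real(np.linalg.eigvals(P)))[::-1]
gap = 1 - evals[1]                     # spectral gap: controls mixing time
print(round(gap, 4), np.allclose(pi @ P, pi))
```

The quantum results discussed below play the same game with Davies or heat-bath generators in place of this classical chain.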
An exact quantum analogue of this cannot be reasonably defined, since the classical definition involves conditioning – which is often the reason classical information theory ideas fail to translate into the quantum case.
One of the first contributions of this work then is to define quantum notions of "weak clustering" (more or less the familiar exponential decay of correlations between well-separated observables) and "strong clustering" (a more complicated definition involving overlapping regions). Then the main result is that there is an intimate connection between the rate of convergence of any quantum algorithm for reaching the Gibbs state and the correlations in the Gibbs state itself. Namely: strong clustering (but not weak clustering) is equivalent to rapid mixing of the Gibbs sampler. Everything here assumes commuting Hamiltonians, by the way. Also, "rapid mixing" is equivalent to the Gibbs sampler being gapped (think of this like the quantum version of being a gapped Markov chain).
One direction is fairly straightforward. To show that strong clustering implies a gapped Gibbs sampler, we directly apply the variational characterization of the gap. (The dynamics of a continuous-time Gibbs sampler can be written as $\frac{d\rho}{dt} = -\mathcal{L}(\rho)$ for some linear superoperator $\mathcal{L}$, which we will assume to be Hermitian for convenience. $\mathcal{L}$ has all nonnegative eigenvalues because the dynamics are stable, and it has a single eigenvalue equal to 0, corresponding to the unique stationary distribution. The gap is given by the smallest positive eigenvalue, and this "smallest" is what gives rise to the variational characterization. See their paper for details.) The variational calculation involves a minimization over (global) states, and strong clustering lets us reduce this to calculations involving local states that are much easier to bound.
In the other direction (gap implies strong clustering), we relate the Gibbs sampler to a local Hamiltonian, and use the detectability lemma, which in fact was originally used in part to prove a statement about decay of correlations. The idea is to construct an AGSP (approximate ground-state projector) which is a low-degree polynomial of the Hamiltonian. Because it's low degree, applying it does not increase the entanglement across any cut by much (useful for proving area laws) or does not propagate correlations far (along the lines of Lieb-Robinson; useful for correlation decay).
When can these results be applied? In 1D, strong and weak clustering are equivalent (because boundary terms can be removed), and therefore both are implied by (Hamiltonian) gap. Also in any number of spatial dimensions, above a universal critical temperature the Gibbs samplers are always gapped.
Some open questions:
Characterizing topological quantum order is a challenging problem in many-body physics. In two dimensions, it is generally accepted that all topologically ordered ground states are described (in a long-range limit) by a theory of anyons. These anyonic theories have characteristic features like topology-dependent degeneracy, local indistinguishability in the ground space, and string-like operators that map between these ground states.
The most famous example of this is Kitaev's toric code, and we are interested in it at a quantum information conference because of its ability to act as a natural quantum error-correcting code. The four ground states of the toric code can be considered as a loop gas, where each ground state is a uniform superposition of all loops on the torus satisfying a given parity constraint.
The goal in this talk is to classify types of topological order using the formalism of matrix product states, and their slightly more general cousins, matrix product operators (MPO). The authors define an algebra for MPOs that mimics the algebra of loop operators in a topologically ordered material. Because matrix product operators have efficient descriptions classically, they are well suited to numerical studies, and their structure also allows them to be used for analytical investigations.
The main idea that the authors introduce is a condition on MPOs so that they behave like topological operators. In particular, they obey a "deformation" condition that lets them be pushed around the lattice, just like Wilson loops.
The authors used this idea to study models that are not stabilizer codes, such as the double semion model and more generally the class of string-net models. This looks like a very promising tool for studying topological order.
Steve: "Counterexamples to the generalized area law" is an implicit admission that they just disproved something that nobody was conjecturing in the first place. :-)
Aram: I’ll blog about this later.
This paper introduces a new formalism called the "XS stabilizer" formalism that allows you to describe states in an analogous way to the standard stabilizer formalism, but where the matrices in the group don't commute. The group is generated by $\alpha = e^{i\pi/4}$, $X$, and $S = \sqrt{Z}$ on $n$ qubits. A state or subspace that is stabilized by a subgroup of these operators is said to be an XS stabilizer state or code. Although these are, as Xiaotong says, "innocent-looking tensor product operators", the stabilizer states and codes can be very highly entangled.
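A quick numerical sanity check on the generators (assuming the standard conventions $S = \mathrm{diag}(1, i)$ and $\alpha = e^{i\pi/4}$; a toy illustration, not from the paper):

```python
import numpy as np

# The XS generators: a global phase alpha of order 8, the Pauli X, and the
# phase gate S = sqrt(Z). Unlike the usual Pauli stabilizer group, X and S
# do not commute, which is the point of the formalism.
alpha = np.exp(1j * np.pi / 4)
X = np.array([[0, 1], [1, 0]], dtype=complex)
S = np.diag([1, 1j])

order_alpha = np.allclose(alpha**8, 1)                        # alpha^8 = 1
S4_is_identity = np.allclose(np.linalg.matrix_power(S, 4), np.eye(2))
commute = np.allclose(X @ S, S @ X)                           # False

print(order_alpha, S4_is_identity, commute)
```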
One of the applications of this formalism is to classify the double semion model, which is a local Hamiltonian model with topological order. There are sets of general conditions for when such states and codes can be ground states of local commuting XS Hamiltonians. Unfortunately, not all of these properties can be computed efficiently; some of them are NP-complete to compute. There are some interesting open questions here: for example, for which classes of commuting projector Hamiltonians is finding the ground state in NP?
"Information complexity" is a variant of communication complexity that measures not the number of bits exchanged in a protocol but the amount of "information", however that is defined. Here is a series of tutorials for the classical case. Entropy is one possibility, since this would put an upper bound on the asymptotic compressibility of many parallel repetitions of a protocol. But in general this gives up too much. If Alice holds random variables $A, C$, Bob holds random variable $B$, and Alice wants to send $C$ to Bob, then the cost of this is (asymptotically) $H(C|B)$.
This claim has a number of qualifications. It is asymptotic and approximate, meaning that it holds in the limit of many copies. However, see 1410.3031 for a one-shot version. And when communication is measured in qubits, the amount is actually $\frac{1}{2}I(C;R|B)$, where $R$ is a purifying reference system.
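For intuition on the classical claim, a toy calculation (my own example): if Bob's $B$ is a noisy copy of Alice's uniform bit $C$, the asymptotic cost $H(C|B)$ is well below $H(C) = 1$:

```python
import math

# B is C passed through a binary symmetric channel with flip probability 0.1.
# Sending C to Bob then costs H(C|B) = h2(0.1) bits per copy, not H(C) = 1:
# the shared correlation saves more than half the communication.
eps = 0.1

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0, 1) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

H_C = 1.0                  # C is a uniform bit
H_C_given_B = h2(eps)      # residual uncertainty about C given B

print(H_C, round(H_C_given_B, 4))   # 1.0 vs ~0.469
```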
Defining this correctly for multiple messages is tricky. In the classical case, there is a well-defined "transcript" (call it $T$) of all the messages, and we can define information cost as $I(T;X|Y) + I(T;Y|X)$, where $X, Y$ are the inputs for Alice and Bob respectively. In the quantum case we realize that the very idea of a transcript implicitly uses the principle that (classical) information can be freely copied, and so for quantum protocols we cannot use it. Instead Dave just sums the QCMI (quantum conditional mutual information) of each step of the protocol: $\frac{1}{2}I(C;R|B)$ when Alice sends message $C$ to Bob, and $\frac{1}{2}I(C;R|A)$ when Bob sends to Alice. Here $A, B$ refer to the entire systems of Alice and Bob respectively, and $R$ is a reference system purifying the inputs. (Earlier work by Yao and Cleve-Buhrman approached this in other, less ideal, ways.)
When minimized over all valid protocols, Dave's version of Quantum Information Complexity represents exactly the amortized quantum communication complexity. This sounds awesome, but there are a bunch of asterisks. First, "minimized over all valid protocols" is an unbounded minimization (and some of these protocols really do use an infinite number of rounds), although it is in a sense "single-shot" in that it considers only protocols for calculating the function once. Also "amortized" here is not quite the same as in Shannon theory. When we talk about the capacity of a channel or its simulation cost (as in the sense of reverse Shannon theorems) we usually demand that the block error rate approach zero. In this case, the information complexity is defined in terms of an error parameter $\epsilon$ (i.e. it is the minimum sum of QCMIs over all protocols that compute the function up to error $\epsilon$). This then corresponds to the asymptotic cost of simulating a large number of evaluations of the function, each of which is allowed to err with probability $\epsilon$. The analogue in Shannon theory is something called rate-distortion theory.
Before you turn up your nose, though, the current talk gets rid of this amortized restriction. QIC (quantum information complexity) is easily seen to be a lower bound for the communication complexity, and this work shows that it is also an upper bound, at least up to a multiplicative factor and an additive term that both scale with the number of rounds. Since QIC is also a lower bound for the above amortized version of complexity, this proves a direct sum theorem, meaning that computing $n$ function values costs $n$ times as much as one function evaluation. Here the weak amortized definition actually makes the result stronger, since we are proving lower bounds on the communication cost. In other words, the lower bound also applies to the case of low blockwise error.
The technical tools are the one-shot state redistribution protocol mentioned above (see also this version) and the Jain-Radhakrishnan-Sen substate theorem (recently reproved in 1103.6067 and the subject of a press release that I suppose justifies calling this a "celebrated" theorem). I should write a blog post about how much I hate it when people refer to "celebrated" theorems. Personally I celebrate things like Thanksgiving and New Year's, not the PCP theorem. But I digress.
Is the quantum capacity of a quantum channel our field's version of string theory? Along the lines of this great Peter Shor book review, quantum Shannon theory has yielded some delightful pleasant surprises, but our attempts to prove an analogue of Shannon's famous formula have turned into a quagmire that has now lasted longer than the Vietnam War.
Today’s talk is the latest grim news on this front. Yes, we have a capacity theorem for the (unassisted) quantum capacity, the famous LSD theorem, but it requires “regularization” meaning maximizing a rescaled entropic quantity over an unbounded number of channel uses. Of course the definition of the capacity itself involves a maximization over an unbounded number of channel uses, so formally speaking we are not better off, although in practice the capacity formula can often give decent lower bounds. On the other hand, we still don’t know if it is even decidable.
Specifically the capacity formula is

$$Q(\mathcal{N}) = \lim_{n \to \infty} \frac{1}{n} \max_{\rho^{(n)}} I_c\big(\rho^{(n)}, \mathcal{N}^{\otimes n}\big),$$

where the maximum is over all inputs $\rho^{(n)}$ to $n$ uses of the channel and $I_c$ is the coherent information (see paper for def). In evaluating this formula, how large do we have to take $n$? E.g. could we prove an explicit bound on how quickly the limit converges? If this, or some formula like it, were true then we would get an explicit upper bound on the complexity of estimating capacity.
The main result here is to give us bad news on this front, in fairly strong terms. For any $n$, they define a channel whose coherent information is zero for up to $n$ uses of the channel, but whose quantum capacity is nonetheless positive.
Thus we need an unbounded number of channel uses to detect whether the quantum capacity (i.e. the regularized coherent information) is even zero or nonzero.
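For concreteness, here is a standard textbook calculation (not this paper's construction): the one-shot coherent information of the qubit erasure channel, which vanishes exactly at erasure probability 1/2, where the no-cloning theorem kills the capacity:

```python
import numpy as np

# One-shot coherent information of the qubit erasure channel, evaluated on
# half of a maximally entangled state. Closed form: I_c = 1 - 2p, which is
# zero at p = 1/2 (the no-cloning point).
def entropy(eigs):
    """von Neumann entropy (bits) from a list of eigenvalues."""
    return -sum(x * np.log2(x) for x in eigs if x > 1e-12)

def coherent_info_erasure(p):
    # Output B: the qubit gets through w.p. 1-p (maximally mixed marginal),
    # or is replaced by an orthogonal erasure flag w.p. p.
    S_B = entropy([(1 - p) / 2, (1 - p) / 2, p])
    # Joint RB state: pure entangled branch w.p. 1-p; flagged branch w.p. p
    # with the reference R maximally mixed.
    S_RB = entropy([1 - p, p / 2, p / 2])
    return S_B - S_RB

for p in [0.0, 0.25, 0.5]:
    print(p, round(coherent_info_erasure(p), 6))
# 0.0 -> 1.0, 0.25 -> 0.5, 0.5 -> 0.0
```

The construction in the talk is of course far subtler: single-copy quantities like this one can be zero while the regularized capacity is positive.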
The talk reviews other non-additivity examples, including classical, private, zero-error quantum and classical capacities. Are there any good review articles here?
Here’s how the construction works. It builds on the Smith-Yard superactivation result, which combines an erasure channel (whose lack of capacity follows from the no-cloning theorem) and a PPT channel (whose lack of capacity follows from being PPT). The PPT channel is chosen to be able to send private information (we know these exist from a paper by H3O) and by using the structure of these states (further developed in later three-Horodecki-and-an-Oppenheim work), one can show that combining this with an erasure channel can send some quantum information. Specifically the PPT channel produces a "shield" which, if faithfully transmitted to Bob, enables perfect quantum communication.
This new construction is similar but uses a shield with many parts any one of which can be used to extract a valid quantum state. On the other hand, the erasure probability is increased nearly to one, and noise is added as well. Proving this is pretty tough and involves sending many things to zero or infinity at varying rates.
During the question period, a prolonged jocular discussion was triggered by John Smolin saying the title was inappropriate: since the authors had clearly shown, by examining the parameters of the channel, that the quantum capacity was positive, detecting positivity of capacity required no channel uses at all.
D. Gottesman suggested a more operational interpretation of the title: given a black box, how many uses of it are needed to decide whether its quantum capacity is positive? If it were, e.g., an erasure channel with erasure probability very close to 1/2, arbitrarily many uses would be needed to confidently decide. It's not clear how to formalize this model.
By the way, better Shannon theory news is coming in a few days for bosonic channels with the talk by Andrea Mari.
Jan 14 update at the end.
The three Pontiffs are reunited at QIP 2015 and, having forgotten how painful liveblogging was in the past, are doing it again. This time we will aim for some slightly more selective comments.
In an ideal world the QIP PC would have written these sorts of summaries and posted them on SciRate, but instead they are posted on EasyChair where most of you can't access them. Sorry about this! We will argue at the business meeting for a more open refereeing process.
The first plenary talk was:
Why is the set of no-signalling distributions worth looking at? (That is, the set of conditional probability distributions $p(a,b|x,y)$ with well-defined marginals $p(a|x)$ and $p(b|y)$.) One way to think about it is as a relaxation of the set of "quantum" distributions, meaning the input-output distributions that are compatible with entangled states. The no-signalling polytope is defined by a polynomial number of linear constraints, and so is the sort of relaxation that is amenable to linear programming, whereas we don't even know whether the quantum value of a game is computable. But is the no-signalling condition ever interesting in itself?
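As a concrete sketch of the no-signalling set (standard CHSH facts, my own code): the PR box is no-signalling and wins the CHSH game with probability 1, while every local deterministic strategy wins with probability at most 3/4.

```python
import itertools

# PR box: p(a,b|x,y) = 1/2 if a XOR b == x AND y, else 0.
def pr_box(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

# Well-defined marginals: p(a|x) must not depend on y (no-signalling).
for a, x in itertools.product([0, 1], repeat=2):
    marg = {y: sum(pr_box(a, b, x, y) for b in [0, 1]) for y in [0, 1]}
    assert marg[0] == marg[1] == 0.5

# CHSH winning probability of the PR box (uniform inputs x, y).
chsh_pr = sum(pr_box(a, b, x, y)
              for x, y in itertools.product([0, 1], repeat=2)
              for a, b in itertools.product([0, 1], repeat=2)
              if (a ^ b) == (x & y)) / 4

# Brute force over all local deterministic strategies a = f(x), b = g(y).
best_local = 0.0
for f in itertools.product([0, 1], repeat=2):
    for g in itertools.product([0, 1], repeat=2):
        wins = sum(1 for x in [0, 1] for y in [0, 1] if (f[x] ^ g[y]) == (x & y))
        best_local = max(best_local, wins / 4)

print(chsh_pr, best_local)   # 1.0 0.75
```

The quantum value, $\cos^2(\pi/8) \approx 0.854$, sits strictly between these two, which is exactly why the no-signalling set is a relaxation.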
Raz and his coauthors (Yael Kalai and Ron Rothblum) prove a major result (which we'll get to below) about the computational power of multi-prover proof systems where the provers have access to arbitrary non-signalling distributions. But they began by trying to prove an apparently unrelated classical crypto result. In general, multiple provers are stronger than one prover. Classically we have MIP=NEXP and IP=PSPACE, and in fact the MIP protocol just requires one round, whereas $k$ rounds with a single prover is (roughly) within the $k$-th level of the polynomial hierarchy (i.e. even below PSPACE). So simulating many provers with one prover seems in general crazy.
But suppose instead the provers are computationally limited. Suppose they are strong enough for the problem to be interesting (i.e. they are much stronger than the verifier, so it is worthwhile for the verifier to delegate some nontrivial computation to them) but too weak to break some FHE (fully homomorphic encryption) scheme. This requires computational assumptions, but nothing too outlandish. Then the situation might be very different. If the verifier sends its queries using FHE, then one prover might simulate many provers without compromising security. This was the intuition of a paper from 2000, which Raz and coauthors finally are able to prove. The catch is that even though the single prover can't break the FHE, it can let its simulated provers play according to a no-signalling distribution. (Or at least this possibility cannot be ruled out.) So proving the security of 1-prover delegated computation requires not only the computational assumptions used for FHE, but also a multi-prover proof system that is secure against no-signalling distributions.
Via this route, Raz and coauthors found themselves in QIP territory. When they started it was known that
This work nails down the complexity of the many-prover setting, showing that EXP is contained in MIP_{ns}[poly provers], so that in fact the classes are equal.
It is a nice open question whether the same is true for a constant number of provers, say 3. By comparison, three entangled provers or two classical provers are strong enough to contain NEXP.
One beautiful consequence is that optimizing a linear function over the no-signalling polytope is roughly a P-complete problem. Previously it was known that linear programming was P-complete, meaning that it was unlikely to be solvable in, say, log space. But this work shows that this is true even if the constraints are fixed once and for all, and only the objective function is varied. (And we allow error.) This is established in a recent follow-up paper [ECCC TR14-170] by two of the same authors.
A technical tour-de-force that we will not do justice to here. One intriguing barrier-breaking aspect of the work is that all previous algorithms for triangle finding worked equally well for the standard unweighted case as well as a weighted variant in which each edge is labeled by a number and the goal is to find a set of edges whose weights add up to a particular target. Indeed this algorithm has a query complexity for the unweighted case that is known to be impossible for the weighted version. A related point is that this shows the limitations of the otherwise versatile non-adaptive learning-graph method.
A classic problem: given $\rho^{\otimes n}$ for an unknown $d$-dimensional state $\rho$, estimate some property of $\rho$. One problem where the answer is still shockingly unknown is to produce an estimate $\hat\rho$ in a way that achieves $\mathbb{E}\,\|\rho - \hat\rho\|_1 \le \epsilon$.
Results from compressed sensing give good sample-complexity bounds for single-copy two-outcome measurements of rank-$r$ states with constant error, but if we allow collective (block) measurements then maybe we can do better. Perhaps a better scaling is possible using the Local Asymptotic Normality results of Guta and Kahn [0804.3876], as Hayashi has told me, but the details are – if we are feeling generous – still implicit. I hope that he, or somebody, works them out. (18 Jan update: thanks Ashley for fixing a bug in an earlier version of this.)
The current talk focuses instead on properties of the spectrum, e.g. how many copies are needed to distinguish a maximally mixed state of one rank from one of a smaller rank? The symmetry of the problem (invariance under both permutations of the copies and rotations of the form $U^{\otimes n}$) means that we can WLOG consider "weak Schur sampling", meaning that we measure which irrep our state lies in and output some function of this result. This irrep is described by an integer partition $\lambda$ which, when normalized, is a sort of mangled estimate of the spectrum. It remains only to analyze the accuracy of this estimator in various ways. In many of the interesting cases we can say something nontrivial even when $n$ is surprisingly small. This involves some delicate calculations using a lot of symmetric polynomials. Some of these first steps (including many of the canonical ones worked out much earlier by people like Werner) are in my paper quant-ph/0609110 with Childs and Wocjan. But the current work goes far far beyond our old paper and introduces many new tools.
This talk considers a new model of query complexity inspired by the Elitzur-Vaidman bomb tester. The bomb tester is a classic demonstration of quantum weirdness: You have a collection of bombs that have a detonation device so sensitive that even a single photon impacting it will set it off. Some of these bombs are live and some are duds, and you'd like to know which is which. Classically, you don't stand a chance, but quantum mechanically, you can send a photon into a beam splitter and place the bomb in one arm of a Mach-Zehnder interferometer. With a dud, the photon interferes with itself and the same detector always clicks. A live bomb destroys the interference, so if the photon survives, there is a 50/50 chance that the other detector clicks, revealing a live bomb without setting it off. There are various tricks that you can play related to the quantum Zeno effect that let you do much better than this success probability.
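The Zeno trick can be sketched with a few lines of arithmetic (a standard idealized analysis, not the authors' model):

```python
import math

# Zeno-style bomb testing: split a pi/2 rotation of the probe photon into N
# small steps of angle theta = pi/(2N). A live bomb "measures" the photon at
# every step, so the probe stays in |0> with probability cos(theta)^2 per
# step and the bomb explodes with probability sin(theta)^2 per step. With a
# dud, the rotations accumulate freely and the probe ends in |1>, so the two
# cases are perfectly distinguishable when nothing explodes.
def zeno_success(N):
    theta = math.pi / (2 * N)
    return math.cos(theta) ** (2 * N)   # P(live bomb detected, no explosion)

for N in [1, 10, 100, 1000]:
    print(N, round(zeno_success(N), 4))
# -> approaches 1 as N grows; the failure probability scales like pi^2/(4N)
```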
The authors define a model of query complexity where one risks explosion for some queries, and they showed that the bomb query complexity $B(f)$ is related to the quantum query complexity by $B(f) = \Theta(Q(f)^2)$. There were several other interesting results in this talk, but we ran out of steam as it was the last talk before lunch.
One unfortunate weakness of this work: The authors, although apparently knowledgeable about Galois theory, don’t seem to know about this link.
The unit group is a fundamental object in algebraic number theory. It comes up frequently in applications as well, and is used for fully homomorphic encryption, code obfuscation, and many other things.
My [Steve] personal way of understanding the unit group of a number field is that it is a sort of gauge group with respect to the factoring problem. The units in a ring are those elements with multiplicative inverses. In the ring of integers, where the units are just $\pm 1$, we can factor composite numbers, e.g. $6 = 2 \cdot 3 = (-2) \cdot (-3)$. Both of these are equally valid factorizations; they are equivalent modulo units. In more complicated settings where unique factorization fails, we have factorization into prime ideals, and the group of units can in general become infinite (though always discrete).
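The "equivalent modulo units" point can be checked directly; as a toy illustration (my own example, using Python complex numbers to stand in for the Gaussian integers $\mathbb{Z}[i]$, whose unit group is $\{1, -1, i, -i\}$):

```python
# In the ordinary integers the units are {+1, -1}: the two factorizations of
# 6 below differ only by multiplication by units.
assert 6 == 2 * 3 == (-2) * (-3)

# In the Gaussian integers Z[i] the unit group is {1, -1, i, -i}. The two
# factorizations of 5 below are related by the unit i:
assert (2 + 1j) * (2 - 1j) == (1 + 2j) * (1 - 2j) == 5
assert (1 + 2j) == 1j * (2 - 1j)   # factors agree "modulo units"

print("both factorizations agree modulo units")
```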
The main result of this talk is a quantum algorithm for finding the unit group of a number field of arbitrary degree. One of the technical problems that they had to solve to get this result was to solve the hidden subgroup problem on a continuous group, namely $\mathbb{R}^n$.
The speaker also announced some work in progress: a quantum algorithm for the principal ideal problem and the class group problem in arbitrary degree number fields [Biasse, Song '14]. It sounds like not all the details of this are finished yet.
Hamiltonian simulation is not only the original killer app of quantum computers, but also a key subroutine in a large and growing number of problems. I remember thinking it was pretty slick that higher-order Trotter-Suzuki could achieve a runtime scaling only slightly superlinearly in $t$, where $t$ is the time we simulate the Hamiltonian for and $d$ is the sparsity. I also remember believing that the known optimality theorems for Trotter-Suzuki (sorry I can't find the reference, but it involves decompositions in the free Lie algebra generated by the terms of the Hamiltonian) meant that this was essentially optimal.
Fortunately, Berry, Childs and Kothari (and in other work, Cleve) weren't so pessimistic, and have blasted past this implicit barrier. This work synthesizes everything that came before to achieve a runtime of $O\!\left(\tau \frac{\log(\tau/\epsilon)}{\log\log(\tau/\epsilon)}\right)$, where $\tau = d\,\|H\|_{\max}\, t$ (and $\|H\|_{\max}$ can be related to the earlier bounds via $\|H\|_{\max} \le \|H\|$).
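The flavor of the older product-formula approach is easy to demonstrate numerically (generic random matrices, my own sketch; this illustrates Trotter-Suzuki error scaling, not the new algorithm):

```python
import numpy as np

# First-order Trotter error for e^{-i(A+B)t} scales as O(t^2/r) in the step
# count r; the symmetric second-order (Strang) formula scales as O(t^3/r^2).
# Doubling r should roughly halve the first and quarter the second.
rng = np.random.default_rng(0)

def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

def expm_herm(H, t):
    """e^{-i H t} for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

d, t = 4, 1.0
A, B = rand_herm(d), rand_herm(d)
exact = expm_herm(A + B, t)

def trotter_error(r, order):
    dt = t / r
    if order == 1:
        step = expm_herm(A, dt) @ expm_herm(B, dt)
    else:  # symmetric splitting
        step = expm_herm(A, dt / 2) @ expm_herm(B, dt) @ expm_herm(A, dt / 2)
    return np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2)

e1 = [trotter_error(r, 1) for r in (10, 20)]
e2 = [trotter_error(r, 2) for r in (10, 20)]
print(e1[0] / e1[1], e2[0] / e2[1])   # ratios near 2 and 4
```

The new algorithms escape this polynomial error-vs-steps tradeoff entirely, achieving error cost logarithmic in $1/\epsilon$.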
One quote I liked: "but this is just a generating function for Bessel functions!" Miraculously, Dominic makes that sound encouraging. The lesson I suppose is to find an important problem (like Hamiltonian simulation) and to approach it with courage.
If you have some salt water with salt concentration 0.1% and some more with concentration 0.2%, then anything in the range [0.1, 0.2] is possible, but no amount of mixing will give you even a single drop with concentration 0.05% or 0.3%, even if you start with oceans at the initial concentrations. Similarly, if Alice and Bob share an unlimited number of locally unbiased random bits with correlation $\rho$, they cannot produce even a single bit with correlation $\rho' > \rho$ if they don't communicate. This was famously proved by Reingold, Vadhan and Wigderson.
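The correlation claim can be verified by exact enumeration in a toy case (my own example: majority wiring applied to three shared pairs of $\rho$-correlated $\pm 1$ bits):

```python
import itertools

# Alice and Bob share three i.i.d. pairs of rho-correlated, individually
# unbiased +-1 bits, and each applies majority to their three bits. Exact
# enumeration gives output correlation (3*rho + rho^3)/4, which is strictly
# below rho for 0 < rho < 1: local wirings cannot amplify correlation.
rho = 0.6
p_eq = (1 + rho) / 2                 # P(a_i == b_i) for each shared pair

def maj(bits):
    return 1 if sum(bits) > 0 else -1

out_corr = 0.0
for a in itertools.product([-1, 1], repeat=3):
    for b in itertools.product([-1, 1], repeat=3):
        w = 1.0
        for ai, bi in zip(a, b):     # pairs are independent
            w *= (p_eq if ai == bi else 1 - p_eq) / 2
        out_corr += w * maj(a) * maj(b)

print(round(out_corr, 6), (3 * rho + rho**3) / 4)   # 0.504 < 0.6
```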
This talk does the same thing for no-signalling boxes. Let's just think about noisy PR boxes to make this concrete. The exciting thing about this work is that it doesn't just prove a no-distillation theorem but it defines an innovative new framework for doing so. The desired result feels like something from information theory, in that there is a monotonicity argument, but it needs to use quantities that do not increase under tensor products.
Here is one such quantity. Define the classical correlation measure $\rho(A,B) = \max \mathbb{E}[f(A)\,g(B)]$, where the maximum is over functions $f, g$ such that $f(A)$ and $g(B)$ each have mean 0 and variance 1. Properties:
Together this shows that any wiring of boxes cannot increase this quantity.
The proof of this involves a more sophisticated correlation measure that is not just a single number but is a region called the hypercontractivity ribbon (originally due to [Ahlswede, Gacs '76]). This is defined to be the set of $(\lambda_1, \lambda_2)$ such that for any $f, g$ we have $\mathbb{E}[f(A)\,g(B)] \le \|f(A)\|_{1/\lambda_1}\, \|g(B)\|_{1/\lambda_2}$.
A remarkable result of [Nair '14] is that this is equivalent to the condition that $\lambda_1 I(U;A) + \lambda_2 I(U;B) \le I(U;AB)$ for any extension of the distribution on $AB$ to one on $ABU$.
Some properties.
For boxes, define the ribbon as the intersection, over all input pairs, of the ribbons of the conditional output distributions. The main theorem is then that rewiring never shrinks the hypercontractivity ribbon. And as a result, PR-box noise cannot be reduced.
These techniques are beautiful and seem as though they should have further application.
Your humble bloggers were at this point also facing an energy constraint which limited our ability to estimate what happened. The setting is that you pick a state, nature applies a unitary (specifically from a group representation) and then you pick a measurement and try to minimize the expected error in estimating the group element corresponding to what nature did. The upshot is that entanglement seems to give a quadratic improvement in metrology. Noise (generally) destroys this. This talk showed that a natural energy constraint on the input also destroys this. One interesting question from Andreas Winter was about what happens when energy constraints are applied also to the measurement, along the lines of 1211.2101 by Navascues and Popescu.
Jan 14 update: forgot one! Sorry Ashley.
Continuing the theme of producing shocking and sometimes superpolynomial speedups to average-case problems, Ashley shows that finding a random pattern of length $m$ in a random text of length $n$ can be done in quantum time $\tilde{O}\big(\sqrt{n/m}\, 2^{O(\sqrt{\log m})}\big)$. Here "random" means something subtle. The text is uniformly random and the pattern is either uniformly random (in the "no" case) or is a random substring of the text (in the "yes" case). There is also a higher-dimensional generalization of the result.
One exciting thing about this is that it is a fairly natural application of Kuperberg's algorithm for the dihedral-group HSP; in fact the first such application, although Kuperberg's original paper does mention a much less natural such variant. (Correction: not really the first – see Andrew's comment below.)
It is interesting to think about this result in the context of the general question about quantum speedups for promise problems. It has long been known that query complexity cannot be improved by more than a polynomial (perhaps quadratic) factor for total functions. The dramatic speedups for things like the HSP, welded trees and even more contrived problems must then use the fact that they work for partial functions, and indeed even "structured" functions. Pattern matching is of course a total function, but not one that will ever be hard on average over a distribution with, say, i.i.d. inputs. Unless the pattern is somehow planted in the text, most distributions simply fail to match with overwhelming probability. It is funny that for i.i.d. bit strings this stops being true when $m \lesssim \log n$, which is almost exactly when Ashley's speedup becomes merely quadratic. So pattern matching is a total function whose hard distributions all look "partial" in some way, at least when quantum speedups are possible. This is somewhat vague, and it may be that some paper out there expresses the idea more clearly.
Part of the strength of this paper is then finding a problem where the promise is so natural. It gives me new hope for the future relevance of things like the HSP.
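The "hard distributions look partial" point rests on simple counting (my own back-of-envelope, not from the paper):

```python
# A fixed pattern of length m matches a given position of a uniformly random
# binary text with probability 2^-m, so the expected number of accidental
# matches in a length-n text is (n - m + 1) * 2^-m. This crosses 1 near
# m = log2(n): below that, random patterns appear by chance; above it, a
# match almost certainly means the pattern was planted.
n = 2**20
expected = {m: (n - m + 1) * 2.0**-m for m in (10, 20, 30)}
for m, e in expected.items():
    print(m, e)
# m = 10: ~1024 spurious matches; m = 20: ~1; m = 30: ~0.001
```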
To everyone that’s attending QIP, welcome to Sydney!
Since I’ve already had to clarify a number of the finer points of Australian slang to my fellow attendees, I thought I would solve the general problem and simply post a helpful dictionary that translates some uniquely Australian words and usages into standard American English.
Also, this thing on the right is called an ibis. It’s not venomous.
Flat white – Try this at least once while you're here, preferably prepared by a highly skilled barista at one of the better cafes. It's similar to a latte or to a cappuccino without the foam, but there are important differences.
Long black – Australian version of the Americano, a bit stronger and with crema. It's the closest you'll get to a cup of filtered drip coffee, if that's your thing.
Short black – If you want a standard espresso, order a short black.
Thongs – Sandals, or flip-flops. The highest level of dress code in Australia is "no thongs".
Togs – Swimwear.
Esky – A cooler; the place where you store your beer to keep it cold while you're getting pissed at the beach.
Pissed – Drunk; the state that a nontrivial fraction of people are in because it's legal to drink at the beach.
Sunnies – Sunglasses.
Mozzy – Mosquito. Usually not a problem at the beach becauseÂ there is almost always a breeze.
Schooner – (SKOO-ner) A medium-sized glass of beer.
Jug – A pitcher of beer.
Shout – To buy a beer for someone, or a round of beers for your table.
Skol – To chug a beer. Usage: "Hey Robbo, if you skol that schooner I'll shout you a jug."
Hotel – In addition to the standard meaning, a hotel is a particular style of pub. It usually has high occupancy and a limited beer selection (though this is starting to improve as craft beer is finally catching on here).
Football – see “Footy”.
Footy – Rugby. It comes in several varieties, with League and Union being the two most popular.
Gridiron – American football. Not generally watched much down under.
Cricket – An inscrutable game that takes 5 days to play. I think the only way you could like this game is to have the British invade, conquer your land, and occupy your territory under their colonial yoke for at least a few generations. That seems to be how everyone else got into it.
Rooting – Do not make the mistake of saying that you are “rooting for team X”; in Australia, rooting is slang for having sex.
Arvo – Afternoon.
Bickie – A cookie or biscuit.
Brekkie – Breakfast.
Fair dinkum – The closest translation is probably "for real". It's used to express the sentiment that you're not deceiving the listener or exaggerating your claims.