MIT + postdoc opening

After 8 years at MIT, and a little over 7 years away, I will return to MIT this January to join the Physics faculty. I love the crazy nerdy atmosphere at MIT, and am really excited about coming back (although I’ll miss the wonderful people at UW).

[Image: MIT Center for Theoretical Physics]

[Image: A typical MIT student.]

[Image: Even if Cambridge doesn't quite look like Sydney, it's a magical place.]

Together with my fellow CTPers Eddie Farhi and Peter Shor, I will also have funding to hire a postdoc there. The other faculty doing the theory side of quantum information at MIT include Scott Aaronson, Ike Chuang, Jeffrey Goldstone and Seth Lloyd; and that's not even getting into the experimentalists (like Orlando and Shapiro), or the math and CS people who sometimes think about quantum (like Kelner and Sipser).

The application deadline, including letters, is December 1.

You can apply here. Note that when it asks for “three senior physicists”, you should interpret that as “three strong researchers in quantum information, preferably senior and/or well-known.”

QIP 2013 accepted talks, travel grants

The accepted talk list for QIP 2013 is now online. Thomas Vidick has done a great job of breaking the list into categories and posting links to papers: see here. He missed only one that I’m aware of: Kamil Michnicki’s paper on stabilizer codes with power law energy barriers is indeed online at arXiv:1208.3496. Here are Thomas’s categories, together with the number of talks in each category.

  • Ground states of local Hamiltonians (4)
  • Cryptography (3)
  • Nonlocality (6)
  • Topological computing and error-correcting codes (4)
  • Algorithms and query complexity (6)
  • Information Theory (9)
  • Complexity (2)
  • Thermodynamics (2)

Other categorizations are also possible, and one important emerging trend (for some years now) is the way that the "information theory" category has broadened far beyond quantum Shannon theory. To indulge in a little self-promotion, my paper 1210.6367 with Brandão is an example of how information-theoretic tools can be usefully applied to many contexts that do not involve sending messages at optimal rates.
It would be fascinating to see how these categories have evolved over the years. A cynic might say that our community is fad-driven, but instead I think that the evolution in QIP topics represents our field working to find its identity and relevance.
On another note, travel grants are available to students and postdocs who want to attend QIP, thanks to funding from the NSF and other organizations. You don’t have to have a talk there, but being an active researcher in quantum information is a plus. Beware that the deadline is November 15 and this is also the deadline for reference letters from advisors.
So apply now!

Victory!

On this historic occasion, let me take this opportunity to congratulate tonight's winner: Nate Silver, of the Mathematics Party. No word yet on a concession speech from the opposition.
Fans will be pleased to know that Nate is now on twitter.

21 = 3 * 7

…with high probability.
That joke was recycled, and in recent work, Bristol experimentalists have found a way to recycle photonic qubits to perform Shor's algorithm using only $\lceil \log N \rceil + 1$ qubits. (Ok, so actually it was Kitaev who figured this out, as they are basically using Kitaev's semi-classical QFT. But they found a way to do this with photons.) Further reductions in the number of qubits can be achieved by "compiling" to throw out the unused ones. In this case, we are left with a qubit and a qutrit. The measurement feedback is also replaced with the usual postselection. However, buried in the body of the paper is the main advance:

Our scheme therefore requires two consecutive photonic CNOT gates–something that has not previously been demonstrated–acting on qubit subspaces of the qutrit.

This is important progress, but until the factored numbers get a lot larger, the “size of number factored” yardstick is going to be a frustrating one. For example, how is it that factoring 143 on an NMR quantum computer takes only 4 qubits while factoring 15 takes 7?
Anyone have ideas for something more objective, like a quantum analogue of MIPS? (Incidentally, apparently MIPS is obsolete because of memory hierarchies.) Perhaps Grover’s algorithm would be good, as “compiling” is harder to justify, and noise is harder to survive.
Update:
In the comments, Tom Lawson points out the real significance of factoring 21. This is the smallest example of factoring in which the desired distribution is not uniformly random, thus making it possible to distinguish the successful execution of an experiment from the effects of noise.
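
To see Tom's point concretely, here is a rough numerical sketch (my own illustration, not the circuit from the Bristol paper) of the ideal order-finding distribution in textbook phase estimation with $t$ counting qubits and an element of order $r$. For $N = 15$, every valid base has order 2 or 4, which divides $2^t$, so the ideal output puts equal weight on a few sharp peaks; for $N = 21$ with base 2, the order is 6, which does not divide $2^t$, so the ideal distribution is genuinely non-uniform.

    import numpy as np

    def order_finding_distribution(r, t):
        """Ideal output distribution of textbook phase estimation with t counting
        qubits, applied to modular multiplication by an element of order r,
        starting from |1> (a uniform mixture of the r phase eigenstates s/r)."""
        T = 2 ** t
        probs = np.zeros(T)
        for s in range(r):
            amps = np.exp(2j * np.pi * np.arange(T) * s / r)  # e^{2 pi i k s / r}
            spectrum = np.fft.fft(amps) / T                   # amplitude of outcome y
            probs += np.abs(spectrum) ** 2 / r
        return probs

    # order 4 (e.g. a = 2, N = 15): four sharp peaks, each with weight exactly 1/4
    print(np.round(order_finding_distribution(4, 3), 3))
    # order 6 (e.g. a = 2, N = 21): 6 does not divide 8, so the weights are unequal
    print(np.round(order_finding_distribution(6, 3), 3))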

Rogue Downloader Prosecuted; Rogue Banks Off the Hook

As a researcher, I can breathe easy posting my papers online, knowing that the federal government will do its best to stop anyone from downloading them in a way that violates a website's terms of use. According to Wired,

Federal prosecutors added nine new felony counts against well-known coder and activist Aaron Swartz, who was charged last year for allegedly breaching hacking laws by downloading millions of academic articles from a subscription database via an open connection at MIT.
Swartz, the 25-year-old executive director of Demand Progress, has a history of downloading massive data sets, both to use in research and to release public domain documents from behind paywalls. He surrendered in July 2011, remains free on bond and faces dozens of years in prison and a $1 million fine if convicted.

Thank goodness, because without the royalty checks from my articles, I’m not sure how I would be able to keep myself afloat.
I am also grateful that federal prosecutors have had the time they need to concentrate on important cases like these, instead of wasting time on small fry like Goldman Sachs.

Lucky 13 paper dance!

Having recently rediscovered arxiv.org/submit, I thought I’d mention a few papers to come out of team UW.
Two particularly exciting single-author papers are from students here.
Kamil Michnicki has developed a 3-d stabilizer code on $n$ qubits with an energy barrier that scales as $n^{2/9}$. By contrast, such a result is impossible in 2-d, and the best energy barrier previously obtained in 3-d was $O(\log n)$, from Haah's breakthrough cubic code. Sadly, this code appears not to have the thermal stability properties of the 4-d toric code, but it is nevertheless an exciting step towards a self-correcting quantum memory.

1208.3496: 3-d quantum stabilizer codes with a power law energy barrier
Kamil Michnicki

David Rosenbaum has written a mostly classical algorithms paper about the old problem of group isomorphism: given two groups $G, H$ specified by their multiplication tables, determine whether they are isomorphic. The problem reduces to graph isomorphism, but may be strictly easier. Since any group with $|G| = n$ has a generating set of size $\leq \log_2(n)$, it follows that the problem can be solved in time $n^{\log(n) + O(1)}$. While faster algorithms have been given in many special cases, the trivial upper bound of $n^{\log(n)}$ has resisted attack for decades. See Gödel's Lost Letter for some discussion. The classes of groups considered to be hardest have been the nilpotent (or more generally solvable) groups, since, paradoxically, the rigidity of highly non-abelian groups (e.g. simple groups) makes them easier to address. David found a polynomial speedup for solvable groups, thus making the first progress on this problem since the initial $n^{\log(n)}$ algorithms.

1205.0642: Breaking the n^{log n} Barrier for Solvable-Group Isomorphism
David Rosenbaum
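
To make the $n^{\log n}$ bound concrete, here is a minimal sketch (my own toy illustration, not David's algorithm) of the classic generator-enumeration approach: greedily pick a generating set of size at most $\log_2 n$, then try all ways of mapping it into $H$ and check whether any such map extends to an isomorphism.

    from itertools import product

    def subgroup_closure(mult, gens):
        """Elements generated by gens under the multiplication table mult
        (mult[a][b] is the product ab; element 0 is assumed to be the identity)."""
        seen = {0} | set(gens)
        changed = True
        while changed:
            changed = False
            for a in list(seen):
                for b in list(seen):
                    c = mult[a][b]
                    if c not in seen:
                        seen.add(c)
                        changed = True
        return seen

    def generating_set(mult):
        """Greedy generating set; each new generator at least doubles the
        generated subgroup, so the set has size at most log2(n)."""
        gens, generated = [], {0}
        for g in range(len(mult)):
            if g not in generated:
                gens.append(g)
                generated = subgroup_closure(mult, gens)
        return gens

    def extends_to_isomorphism(multG, multH, gens, images):
        """Does mapping gens[i] -> images[i] extend to an isomorphism G -> H?"""
        n = len(multG)
        phi = {0: 0}
        phi.update(zip(gens, images))
        changed = True
        while changed:                       # close the partial map under products
            changed = False
            for a in list(phi):
                for b in list(phi):
                    g, h = multG[a][b], multH[phi[a]][phi[b]]
                    if g in phi:
                        if phi[g] != h:      # not a homomorphism
                            return False
                    else:
                        phi[g] = h
                        changed = True
        return len(phi) == n and len(set(phi.values())) == n

    def isomorphic(multG, multH):
        """Generator enumeration: at most n^(log2 n) candidate maps to check."""
        if len(multG) != len(multH):
            return False
        gens = generating_set(multG)
        return any(extends_to_isomorphism(multG, multH, gens, images)
                   for images in product(range(len(multH)), repeat=len(gens)))

    # Z4 and Z2 x Z2 have the same order but are not isomorphic
    Z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
    V4 = [[a ^ b for b in range(4)] for a in range(4)]   # Klein four-group via XOR
    print(isomorphic(Z4, Z4), isomorphic(Z4, V4))        # True False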

Lukas Svec (together with collaborators) also has a nice way of improving the Gottesman-Knill simulations that have been so effective in estimating FTQC thresholds. Gottesman-Knill allows mixtures of Clifford unitaries to be simulated classically, which seems as though it should only be effective for simulating unital noise. However, throwing away a qubit and replacing it with the $|0\rangle$ state can also be encompassed within the Gottesman-Knill approach. This insight allows them to give much better simulations of amplitude-damping noise than any previous approach.

1207.0046: Approximation of real error channels by Clifford channels and Pauli measurements
Mauricio Gutiérrez, Lukas Svec, Alexander Vargo, Kenneth R. Brown
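
To illustrate the basic point with a toy example of my own (not the construction from the paper): amplitude damping is non-unital, so no mixture of Pauli channels reproduces it, but already a mixture of "do nothing" and "reset to $|0\rangle$" matches its effect on populations exactly, at the cost of over-damping the coherences.

    import numpy as np

    gamma = 0.3
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])   # amplitude-damping Kraus operators
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

    def amplitude_damping(rho):
        return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

    def identity_plus_reset(rho):
        """Stabilizer-friendly approximation: with probability gamma, reset to
        |0><0|; otherwise do nothing. (The paper optimizes over a richer set of
        Clifford channels and Pauli measurements.)"""
        reset = np.array([[1, 0], [0, 0]]) * np.trace(rho)
        return (1 - gamma) * rho + gamma * reset

    rho = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|: both population and coherence
    print(np.round(amplitude_damping(rho), 3))
    print(np.round(identity_plus_reset(rho), 3))
    # The diagonals agree exactly; the off-diagonals are damped by (1 - gamma)
    # instead of sqrt(1 - gamma), the price of staying within Gottesman-Knill.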

There have also been two papers about the possibilities of two dimensions. David has a paper explaining how a general circuit with arbitrary two-qubit interactions on $n$ qubits can be simulated in a 2-d architecture using $n^2$ qubits. If only $k$ gates happen per time step, then $nk$ qubits suffice. The key trick is to use classical control to perform teleportation chains, an idea whose provenance I'm unclear on, but which is based in part on MBQC and in part on a remarkable paper of Terhal and DiVincenzo.

1205.0036: Optimal Quantum Circuits for Nearest-Neighbor Architectures
David Rosenbaum

Examining particular algorithms can enable more dramatic speedups, and by examining Shor’s algorithm, Paul Pham was able to reduce the depth to polylogarithmic, surprisingly finding an improved implementation of the most well-studied of quantum algorithms.

1207.6655: A 2D Nearest-Neighbor Quantum Architecture for Factoring
Paul Pham, Krysta M. Svore

David and I also have a joint paper on an alternate oracle model in which one input to the oracle is supplied by the user and a second input is random noise. While in some cases (e.g. a Grover oracle that misfires) this does not lead to quantum advantages, we find that in other cases, quantum computers can solve problems in a single query that classical computers cannot solve with unlimited queries. Along the way, we address the question of when some number of (quantum or classical) queries yield no useful information at all about the answer to an oracle problem.

1111.1462: Uselessness for an Oracle Model with Internal Randomness
David Rosenbaum, Aram W. Harrow

My co-blogger Steve has also been active. Steve and his co-authors (does that make them my step-co-authors?) have written perhaps the definitive work on how to estimate approximately low-rank density matrices using a small number of measurements.

1205.2300: Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators
Steven T. Flammia, David Gross, Yi-Kai Liu, Jens Eisert

Steve, together with Ghost Pontiff Dave and Dave's former student Gregory Crosswhite, has also posted

1207.2769: Adiabatic Quantum Transistors
Dave Bacon, Steven T. Flammia, Gregory M. Crosswhite

This paper proposes a deeply innovative approach to quantum computing, in which one adiabatically transforms one simple spatially-local Hamiltonian to another. Unlike previous approaches, it seems to have a chance of having some compelling fault-tolerance properties, although analyzing this remains challenging.
Steve and I also have a brief note (arXiv:1204.3404) relevant to my ongoing debate with Gil Kalai (see here, here, here, here, here, here or here) in which we point out counterexamples to one of Gil’s conjectures. This post specifically contains more discussion of the issue.
Finally, I’ve been clearing out a lot of my unpublished backlog this year.
My co-authors and I wrote a short paper explaining the main ideas behind the superactivation of zero-error capacity. The principle is similar to that found in all of the additivity violations based on random quantum channels: we choose a correlated distribution over channels $\mathcal{N}_1, \mathcal{N}_2$ in a way that forces $\mathcal{N}_1 \otimes \mathcal{N}_2$ to have some desired behavior (e.g. when acting on a particular maximally entangled state). At the same time, apart from this constraint, the distribution is as random as possible. Hopefully we can then show that any single-copy use of $\mathcal{N}_1$ or $\mathcal{N}_2$ has low capacity, or in our case, zero zero-error capacity. In our case, there are a few twists, since we are talking about zero-error capacity, which is a fragile property more suited to algebraic geometry than the usual approximate techniques in information theory. On the other hand, this means that at many points we can show that properties hold with probability 1. The other nontrivial twist is that we have to show not only that $\mathcal{N}_i$ has zero zero-error capacity (yeah, I know it's a ridiculous expression) but that $\mathcal{N}_i^{\otimes n}$ does for all $n$. This can be done with some more algebraic geometry (which is a fancy way of saying that the set of simultaneous zeroes of a set of polynomials has measure equal to either 0 or 1) as well as the fact that the property of being an unextendible product basis is stable under tensor product.

1109.0540: Entanglement can completely defeat quantum noise
Jianxin Chen, Toby S. Cubitt, Aram W. Harrow, Graeme Smith

One paper that was a fun bridge-building exercise (with a nice shout out from Umesh Vazirani/BILL GASARCH) was a project with quantum information superstar Fernando Brandão as well as a pack of classical CS theorists. My part of the paper involved connections between QMA(2) and optimizing polynomials over $\mathbb{R}^n$. For example, if $a_1, \ldots, a_m$ are the rows of a matrix $A$, then define $\|A\|_{2\rightarrow 4} = \max_{\|x\|_2=1} \|Ax\|_4 = \max_{\|x\|_2=1} \left(\sum_{i=1}^m |\langle a_i, x\rangle|^4\right)^{1/4}$. Taking the fourth power, we obtain the maximum energy attainable by product states under the Hamiltonian $H = \sum_{i=1}^m a_i a_i^* \otimes a_i a_i^*$. Thus, hardness results and algorithms can be ported in both directions. One natural algorithm is called "the Lasserre SDP hierarchy" classically and "optimizing over k-extendable states" quantumly, but in fact these are essentially the same thing (an observation dating back to a 2003 paper of Doherty, Parrilo, and Spedalieri). There is much more to the paper, but I'll leave it at that for now.

1205.4484: Hypercontractivity, Sum-of-Squares Proofs, and their Applications
Boaz Barak, Fernando G.S.L. Brandão, Aram W. Harrow, Jonathan A. Kelner, David Steurer, Yuan Zhou
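
Here is a quick numerical sanity check of this correspondence (a toy illustration of my own, restricted to real vectors): for any unit vector $x$, $\|Ax\|_4^4$ equals the energy of the product state $x \otimes x$ under $H$.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 6, 4
    A = rng.standard_normal((m, n))      # rows a_1, ..., a_m

    # H = sum_i (a_i a_i^T) (x) (a_i a_i^T), acting on R^n (x) R^n
    H = sum(np.kron(np.outer(a, a), np.outer(a, a)) for a in A)

    def product_state_energy(x):
        xx = np.kron(x, x)
        return xx @ H @ xx

    def four_norm_to_the_fourth(x):
        return np.sum((A @ x) ** 4)

    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    print(product_state_energy(x), four_norm_to_the_fourth(x))   # these agree

    # crude lower bound on ||A||_{2->4}: maximize over many random unit vectors
    best = max(four_norm_to_the_fourth(v / np.linalg.norm(v))
               for v in rng.standard_normal((20000, n))) ** 0.25
    print("estimated ||A||_{2->4} >=", best)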

Another big collaboration taking me out of my usual areas was this paper on quantum architecture. Suppose that our quantum computer comprises many small nodes (say ion traps), connected by long-range links (say optical fibers), as has been recently advocated. This computer would not be stuck with a 2-d topology, but could be connected in any reasonably low-degree configuration. Our paper shows that a hypercube topology (among many other possibilities) is enough to simulate general quantum circuits. This enables parallelized versions of Grover search that finally (in my opinion) address the problem raised by Grover and Rudolph about the memory requirements for the "best" known collision and element-distinctness algorithms. As a result, we find space-time tradeoffs (assuming this hypercube topology) of $ST = \tilde{O}(\sqrt{N})$ for collision and $ST = \tilde{O}(N)$ for element distinctness.

1207.2307: Efficient Distributed Quantum Computing
Robert Beals, Stephen Brierley, Oliver Gray, Aram W. Harrow, Samuel Kutin, Noah Linden, Dan Shepherd, Mark Stather

The next paper was on more familiar territory. Together with my former PhD student Richard Low (and building on the work of Dahlsten, Oliveira and Plenio), I proved that random quantum circuits are approximate unitary 2-designs in 0802.1919. Later Fernando Brandão and Michal Horodecki improved this to show that random quantum circuits are approximate unitary 3-designs in 1010.3654 (achieving a sweet oracle speedup in the process). Teaming up with them, I expected maybe to reach 5-designs, but in the end we were able to get arbitrary $k$-designs on $n$ qubits with circuits of length $\mathrm{poly}(n,k)$.

1208.0692: Local random quantum circuits are approximate polynomial-designs
Fernando G. S. L. Brandão, Aram W. Harrow, Michal Horodecki

Finally, one more paper came out of my classical CS dabbling. Together with classical (but quantum curious) theorists Alexandra Kolla and Leonard Schulman, we found a cute combinatorial result in our failed bid to refute the unique games conjecture on the hypercube. Our result concerns what are called maximal functions. Hardy and Littlewood introduced these by talking about cricket; here is a contemporary version. Imagine that you are a Red Sox fan whose happiness at any given time depends on what fraction of the last n games the Red Sox have won against the Yankees. Fortunately, you are willing to choose n differently from day to day in order to maximize your happiness. For example, if the Red Sox won the last game, you take n=1 and your happiness is 100%. If they won 3 out of the last 5 games, you could take n=5 and obtain happiness 60%. The maximal operator takes the win-loss series and transforms it into the happiness function. (A similar principle is at work when the loser of rock-paper-scissors proposes best 3-out-of-5, and then best 4-out-of-7, etc.) In our paper, we bound how much the maximal operator can increase the 2-norm of a function when it acts on functions on the hypercube, and the maximization is taken not over intervals, but over Hamming spheres in the hypercube. Along the way, we prove some bounds on Krawtchouk polynomials that seem so simple and useful I feel we must have overlooked some existing paper that already proves them.

1209.4148: Dimension-free L2 maximal inequality for spherical means in the hypercube
Aram W. Harrow, Alexandra Kolla, Leonard J. Schulman
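
For concreteness, here is the interval version of the maximal operator from the baseball analogy, as a small sketch (the paper's actual result concerns the analogous operator over Hamming spheres in the hypercube, which this does not attempt):

    def happiness(results):
        """results[t] = 1 if the Red Sox won game t, else 0. Returns, for each
        day t, the maximum over n of the win fraction in the last n games."""
        out = []
        for t in range(len(results)):
            out.append(max(sum(results[t - n + 1: t + 1]) / n
                           for n in range(1, t + 2)))
        return out

    # win, loss, win, win, loss: on the last day the best choice is n = 3 (2/3)
    print(happiness([1, 0, 1, 1, 0]))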

I have a friend in Minsk…

Unlike the usual university plagiarism policy, which is typically something bland copied from another university, the University of Bergen has explained the problem, and consequences, of plagiarism with Hollywood-level production values.

(Be sure to click on “CC” for the English subtitles.)
I came across the video from the blog of Andrew Gelman, who has long chronicled plagiarism and other forms of scientific misconduct. From his post, I also learned about my new favorite plagiarism story.

good news for quantum computing?

Once large-scale quantum computers exist and can break RSA, of course everyone will switch to a quantum-resistant cryptosystem. Sure, the work that classical computers do to encode and decode will be polynomially higher, but on the other hand, all of the code-breaking advantages of quantum computers will be nullified, and we’ll be stuck trying to explain to the NSA why they should fund simulations of benzene.
Or will we?
One thing we've learned from the (semi-)recent hack of stratfor.com (a "subscription-based provider of geopolitical analysis") is that even though the importance of salts has been known since at least the days of the Enigma machine, people still don't follow best practices, and still store things like credit card numbers in the clear.
So don’t worry: even after “post-quantum” crypto has been developed, and large-scale quantum computers are common, there should be plenty of RSA around to crack.
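
(For the record, salting just means hashing each password together with a fresh random string, so that identical passwords don't produce identical hashes and precomputed tables are useless. A minimal sketch using Python's standard library:)

    import hashlib, hmac, os

    def hash_password(password):
        """Return (salt, digest); store these, never the password itself."""
        salt = os.urandom(16)                # fresh random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def check_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(check_password("correct horse battery staple", salt, digest))  # True
    print(check_password("hunter2", salt, digest))                       # False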

QIP 2013

QIP is in Beijing this time from Jan 21-25, 2013. And the call for papers has just been posted.

  • Submission deadline for talks: October 5, 2012
  • Notification for talk submissions: November 9, 2012
  • Submission deadline for posters: November 16, 2012
  • Notification for poster submissions: November 26, 2012

The submission guidelines continue to slowly evolve. Last year's call asked for "2-3 pages in a reasonable font" and this year's asks for the somewhat more precise "up to 3 pages in 11pt font with reasonable margins, not including references". From my experience on past PCs (I'm not on the PC this year, though), I'll mention that these short submissions are sometimes very good, but sometimes don't address the key questions that the PC will want to consider, such as:

  • What is the key result?
  • How does it compare with previous work on this subject? And how does it fit into the broader context of other research?
  • What are the technical innovations necessary to achieve it, and possibly to overcome obstacles that stopped people before?
  • (If the topic area is unconventional, or the submission is about a newly invented problem.) Why is this a good area for people to study?

These are kind of basic, but it’s surprising how many good results are undermined by not explaining these points well.
Another feature of the CFP to notice is that authors are strongly encouraged to link to a version of their paper on the arxiv, or to include a long version of their paper as an attachment. The idea here is that if a result is not ready to go on the arxiv, then it's less likely to be correct. There was debate about making it mandatory to post a full version on the arxiv, but this idea was dropped because many people felt that this way we might miss out on the freshest results.
A related concern is about keeping the barriers to submitting to QIP low. Of course, travel is always a barrier, but hopefully we shouldn’t waste a lot of people’s time with elaborate submission requirements if we expect the acceptance rate to be around 20%. I think the current requirements are reasonable, as long as people don’t stress too much about making a special short version, e.g. by writing something informal about why the paper should be accepted and referring liberally to the long version instead of trying to pack as much as possible in three pages. But I can imagine other systems as well. What do you think about the way QIP is being run right now?

QIP 2012 videos!

The long awaited videos from QIP 2012 are now online.
Somewhat. If you see a video that you want that isn’t online, then tell the speaker to sign the release form! I’m looking at you Bravyi, Greiner, Landahl and Popescu! (And others.)
If you haven’t seen them before, then you should check out the talks by Haah (on 3-d local stabilizer codes) and Arad (on an improved 1-d area law).