Rogue Downloader Prosecuted; Rogue Banks Off the Hook

As a researcher, I can breathe easy posting my papers online, knowing that the federal government will do its best to stop anyone from downloading them in a way that violates a website’s terms of use. According to Wired,

Federal prosecutors added nine new felony counts against well-known coder and activist Aaron Swartz, who was charged last year for allegedly breaching hacking laws by downloading millions of academic articles from a subscription database via an open connection at MIT.
Swartz, the 25-year-old executive director of Demand Progress, has a history of downloading massive data sets, both to use in research and to release public domain documents from behind paywalls. He surrendered in July 2011, remains free on bond and faces dozens of years in prison and a $1 million fine if convicted.

Thank goodness, because without the royalty checks from my articles, I’m not sure how I would be able to keep myself afloat.
I am also grateful that federal prosecutors have had the time they need to concentrate on important cases like these, instead of wasting time on small fry like Goldman Sachs.

Lucky 13 paper dance!

Having recently rediscovered arxiv.org/submit, I thought I’d mention a few papers to come out of team UW.
Two particularly exciting single-author papers are from students here.
Kamil Michnicki has developed a 3-d stabilizer code on $latex n$ qubits with an energy barrier that scales as $latex n^{2/9}$. By contrast, such a result is impossible in 2-d, and the best energy barrier previously obtained in 3-d was $latex O(\log n)$ from Haah’s breakthrough cubic code. Sadly, this code appears not to have the thermal stability properties of the 4-d toric code, but it nevertheless is an exciting step towards a self-correcting quantum memory.

1208.3496: 3-d quantum stabilizer codes with a power law energy barrier
Kamil Michnicki

David Rosenbaum has written a mostly classical algorithms paper about the old problem of group isomorphism: given two groups $latex G,H$ specified by their multiplication tables, determine whether they are isomorphic. The problem reduces to graph isomorphism, but may be strictly easier. Since any group with $latex |G|=n$ has a generating set of size $latex \leq \log_2(n)$, it follows that the problem can be solved in time $latex n^{\log(n)+O(1)}$. While faster algorithms have been given in many special cases, the trivial upper bound of $latex n^{\log(n)}$ has resisted attack for decades. See Gödel’s Lost Letter for some discussion. The particular classes of groups considered to be hardest have been the nilpotent (or more generally solvable) groups, since paradoxically the rigidity of highly non-abelian groups (e.g. simple groups) makes them easier to address. David found a polynomial speedup for solvable groups, thus making the first progress on this problem since the initial $latex n^{\log(n)}$ algorithms.

1205.0642: Breaking the $latex n^{\log n}$ Barrier for Solvable-Group Isomorphism
David Rosenbaum
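
To see where that trivial bound comes from, here is a minimal Python sketch of the generator-enumeration baseline (this is just the $latex n^{\log(n)+O(1)}$ algorithm, not David’s improvement, and it assumes element 0 is the identity in each multiplication table):

```python
from itertools import product

def close(mul, elems):
    """Subgroup generated by elems (closure under the multiplication table)."""
    S = set(elems) | {0}
    while True:
        new = {mul[a][b] for a in S for b in S} - S
        if not new:
            return S
        S |= new

def generating_set(mul):
    """Greedy generating set: each new generator at least doubles the
    generated subgroup, so we end up with at most log2(n) generators."""
    gens, span = [], {0}
    for g in range(len(mul)):
        if g not in span:
            gens.append(g)
            span = close(mul, span | {g})
    return gens

def bfs_words(mul, gens):
    """Express every group element as (earlier element) * (generator)."""
    word, order, frontier = {0: None}, [], [0]
    while frontier:
        nxt = []
        for h in frontier:
            for s in gens:
                g = mul[h][s]
                if g not in word:
                    word[g] = (h, s)
                    order.append((g, h, s))
                    nxt.append(g)
        frontier = nxt
    return order

def isomorphic(mulG, mulH):
    n = len(mulG)
    if n != len(mulH):
        return False
    gens = generating_set(mulG)            # at most log2(n) generators
    order = bfs_words(mulG, gens)
    for images in product(range(n), repeat=len(gens)):   # <= n^{log2 n} candidates
        img = dict(zip(gens, images))
        phi = {0: 0}
        for g, h, s in order:              # extend the candidate map to all of G
            phi[g] = mulH[phi[h]][img[s]]
        if len(set(phi.values())) == n and all(
            phi[mulG[a][b]] == mulH[phi[a]][phi[b]]
            for a in range(n) for b in range(n)):
            return True
    return False

# Z_4 vs the Klein four-group: same order, not isomorphic.
Z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
V4 = [[i ^ j for j in range(4)] for i in range(4)]
print(isomorphic(Z4, Z4), isomorphic(Z4, V4))   # True False
```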

Lukas Svec (together with collaborators) also has a nice way of improving the Gottesman-Knill simulations that have been so effective in estimating FTQC thresholds. Gottesman-Knill allows mixtures of Clifford unitaries to be simulated classically, which seems as though it should only be effective for simulating unital noise. However, throwing away a qubit and replacing it with the $latex |0\rangle$ state can also be encompassed within the Gottesman-Knill approach. This insight allows them to give much better simulations of amplitude-damping noise than any previous approach.

1207.0046: Approximation of real error channels by Clifford channels and Pauli measurements
Mauricio Gutiérrez, Lukas Svec, Alexander Vargo, Kenneth R. Brown
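
For a concrete sense of why the reset trick matters, here is a small numpy illustration of my own (not code from the paper): the standard amplitude-damping channel is non-unital, so no mixture of unitaries reproduces it, while its $latex \gamma=1$ limit is exactly the reset-to-$latex |0\rangle$ map, which a stabilizer simulation can implement as a Z measurement followed by a conditional X.

```python
import numpy as np

# Standard amplitude-damping channel with decay probability g:
#   K0 = [[1, 0], [0, sqrt(1-g)]],  K1 = [[0, sqrt(g)], [0, 0]].
def amp_damp(rho, g):
    K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
    K1 = np.array([[0, np.sqrt(g)], [0, 0]])
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# Non-unital: the maximally mixed state is not a fixed point, so no mixture
# of unitaries (Clifford or otherwise) can reproduce this channel exactly.
print(amp_damp(np.eye(2) / 2, 0.3))

# The g = 1 limit is exactly "throw the qubit away and reset it to |0><0|".
print(amp_damp(np.array([[0.2, 0.1], [0.1, 0.8]]), 1.0))
```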

There have also been two papers about the possibilities of two dimensions. David has a paper explaining how a general circuit with arbitrary two-qubit interactions on $latex n$ qubits can be simulated in a 2-d architecture by using $latex n^2$ qubits. If only $latex k$ gates happen per time step, then $latex nk$ qubits suffice. The key trick is to use classical control to perform teleportation chains, an idea whose provenance I’m unclear on, but which is based in part on MBQC and in part on a remarkable paper of Terhal and DiVincenzo.

1205.0036: Optimal Quantum Circuits for Nearest-Neighbor Architectures
David Rosenbaum

Examining particular algorithms can enable more dramatic speedups: by specializing to Shor’s algorithm, Paul Pham was able to reduce the depth to polylogarithmic, surprisingly finding an improved implementation of the most well-studied of quantum algorithms.

1207.6655: A 2D Nearest-Neighbor Quantum Architecture for Factoring
Paul Pham, Krysta M. Svore

David and I also have a joint paper on an alternate oracle model in which one input to the oracle is supplied by the user and a second input is random noise. While in some cases (e.g. a Grover oracle that misfires) this does not lead to quantum advantages, we find that in other cases, quantum computers can solve problems in a single query that classical computers cannot solve with unlimited queries. Along the way, we address the question of when some number of (quantum or classical) queries yield no useful information at all about the answer to an oracle problem.

1111.1462: Uselessness for an Oracle Model with Internal Randomness
David Rosenbaum, Aram W. Harrow

My co-blogger Steve has also been active. Steve and his co-authors (does that make them my step-co-authors?) have written perhaps the definitive work on how to estimate approximately low-rank density matrices using a small number of measurements.

1205.2300: Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators
Steven T. Flammia, David Gross, Yi-Kai Liu, Jens Eisert

Steve, together with Ghost Pontiff Dave and Dave’s former student Gregory Crosswhite, has also posted

1207.2769: Adiabatic Quantum Transistors
Dave Bacon, Steven T. Flammia, Gregory M. Crosswhite

This paper proposes a deeply innovative approach to quantum computing, in which one adiabatically transforms one simple spatially-local Hamiltonian to another. Unlike previous approaches, it seems to have a chance of having some compelling fault-tolerance properties, although analyzing this remains challenging.
Steve and I also have a brief note (arXiv:1204.3404) relevant to my ongoing debate with Gil Kalai (see here, here, here, here, here, here or here) in which we point out counterexamples to one of Gil’s conjectures. This post specifically contains more discussion of the issue.
Finally, I’ve been clearing out a lot of my unpublished backlog this year.
My co-authors and I wrote a short paper explaining the main ideas behind the superactivation of zero-error capacity. The principle is similar to that found in all of the additivity violations based on random quantum channels: we choose a correlated distribution over channels $latex {\cal N}_1, {\cal N}_2$ in a way that forces $latex {\cal N}_1 \otimes {\cal N}_2$ to have some desired behavior (e.g. when acting on a particular maximally entangled state). At the same time, apart from this constraint, the distribution is as random as possible. Hopefully we can then show that any single-copy use of $latex {\cal N}_1$ or $latex {\cal N}_2$ has low capacity, or in our case, zero zero-error capacity. In our setting, there are a few twists, since we are talking about zero-error capacity, which is a fragile property more suited to algebraic geometry than the usual approximate techniques in information theory. On the other hand, this means that at many points we can show that properties hold with probability 1. The other nontrivial twist is that we have to show that not only $latex {\cal N}_i$ has zero zero-error capacity (yeah, I know it’s a ridiculous expression) but $latex {\cal N}_i^{\otimes n}$ does for all $latex n$. This can be done with some more algebraic geometry (which is a fancy way of saying that the simultaneous zeroes of a set of polynomials have measure equal to either 0 or 1) as well as the fact that the property of being an unextendible product basis is stable under tensor product.

1109.0540: Entanglement can completely defeat quantum noise
Jianxin Chen, Toby S. Cubitt, Aram W. Harrow, Graeme Smith

One paper that was a fun bridge-building exercise (with a nice shout out from Umesh Vazirani/BILL GASARCH) was a project with quantum information superstar Fernando Brandão as well as a pack of classical CS theorists. My part of the paper involved connections between QMA(2) and optimizing polynomials over $latex \mathbb{R}^n$. For example, if $latex a_1,\ldots, a_m$ are the rows of a matrix $latex A$, then define $latex \|A\|_{2\rightarrow 4} = \max_{\|x\|_2=1} \|Ax\|_4 = \max_{\|x\|_2=1} (\sum_{i=1}^m |\langle a_i, x\rangle|^4)^{1/4}$. Taking the fourth power we obtain the maximum energy attainable by product states under the Hamiltonian $latex H = \sum_{i=1}^m a_i a_i^* \otimes a_i a_i^*$. Thus, hardness results and algorithms can be ported in both directions. One natural algorithm is called “the Lasserre SDP hierarchy” classically and “optimizing over $latex k$-extendable states” quantumly, but in fact these are essentially the same thing (an observation dating back to a 2003 paper of Doherty, Parrilo, and Spedalieri). There is much more to the paper, but I’ll leave it at that for now.

1205.4484: Hypercontractivity, Sum-of-Squares Proofs, and their Applications
Boaz Barak, Fernando G.S.L. Brandão, Aram W. Harrow, Jonathan A. Kelner, David Steurer, Yuan Zhou
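
Here is a quick numerical sanity check of that identity in the real case; it is only a toy script of mine, with random-sampling “optimization” that is meant to be suggestive rather than rigorous.

```python
import numpy as np

# Toy check (real case): max_{|x|_2=1} |Ax|_4^4 equals the maximum energy of
# H = sum_i a_i a_i^T (tensor) a_i a_i^T over real product states x (tensor) y.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))     # rows a_1, ..., a_6 in R^3

def rand_unit(d):
    x = rng.standard_normal(d)
    return x / np.linalg.norm(x)

def norm_2_to_4_pow4(A, trials=100_000):
    return max(np.sum((A @ rand_unit(A.shape[1])) ** 4) for _ in range(trials))

def product_state_energy(A, trials=100_000):
    return max(np.sum((A @ rand_unit(A.shape[1])) ** 2 *
                      (A @ rand_unit(A.shape[1])) ** 2) for _ in range(trials))

# The two maxima agree (up to the sloppiness of random sampling);
# the product-state maximum is attained at x = y.
print(norm_2_to_4_pow4(A), product_state_energy(A))
```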

Another big collaboration taking me out of my usual areas was this paper on quantum architecture. Suppose that our quantum computer is composed of many small nodes (say ion traps), connected by long-range links (say optical fibers), as has been recently advocated. This computer would not be stuck with a 2-d topology, but could be connected in any reasonably low-degree configuration. Our paper shows that a hypercube topology (among many other possibilities) is enough to simulate general quantum circuits. This enables parallelized versions of Grover search that finally (in my opinion) address the problem raised by Grover and Rudolph about the memory requirements for the “best” known collision and element-distinctness algorithms. As a result, we find space-time tradeoffs (assuming this hypercube topology) for collision and element distinctness of $latex ST=\tilde O(\sqrt N)$ and $latex ST=\tilde O(N)$ respectively.

1207.2307: Efficient Distributed Quantum Computing
Robert Beals, Stephen Brierley, Oliver Gray, Aram W. Harrow, Samuel Kutin, Noah Linden, Dan Shepherd, Mark Stather

The next paper was on more familiar territory. Together with my former PhD student Richard Low (and building on the work of Dahlsten, Oliveira and Plenio), I proved that random quantum circuits are approximate unitary 2-designs in 0802.1919. Later Fernando Brandão and Michal Horodecki improved this to show that random quantum circuits are approximate unitary 3-designs in 1010.3654 (achieving a sweet oracle speedup in the process). Teaming up with them I expected maybe to reach 5-designs, but in the end we were able to get arbitrary $latex k$-designs on $latex n$ qubits with circuits of length $latex {\rm poly}(n,k)$.

1208.0692: Local random quantum circuits are approximate polynomial-designs
Fernando G. S. L. Brandão, Aram W. Harrow, Michal Horodecki

Finally, one more paper came out of my classical CS dabbling. Together with classical (but quantum curious) theorists Alexandra Kolla and Leonard Schulman, we found a cute combinatorial result in our failed bid to refute the unique games conjecture on the hypercube. Our result concerns what are called maximal functions. Hardy and Littlewood introduced these by talking about cricket; here is a contemporary version. Imagine that you are a Red Sox fan whose happiness at any given time depends on what fraction of the last $latex n$ games the Red Sox have won against the Yankees. Fortunately, you are willing to choose $latex n$ differently from day to day in order to maximize your happiness. For example, if the Red Sox won the last game, you take $latex n=1$ and your happiness is 100%. If they won 3 out of the last 5 games, you could take $latex n=5$ and obtain happiness 60%. The maximal operator takes the win-loss series and transforms it into the happiness function. (A similar principle is at work when the loser of rock-paper-scissors proposes best 3-out-of-5, and then best 4-out-of-7, etc.) In our paper, we bound how much the maximal operator can increase the 2-norm of a function when it acts on functions on the hypercube, and the maximization is taken not over intervals, but over Hamming spheres in the hypercube. Along the way, we prove some bounds on Krawtchouk polynomials that seem so simple and useful I feel we must have overlooked some existing paper that already proves them.

1209.4148: Dimension-free L2 maximal inequality for spherical means in the hypercube
Aram W. Harrow, Alexandra Kolla, Leonard J. Schulman
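
For the Red Sox version over intervals (not the Hamming-sphere version the paper actually studies), the maximal operator is simple enough to spell out in a few lines of Python:

```python
# The interval version of the maximal operator from the Red Sox story
# (the paper works with spherical means on the hypercube instead).
def happiness(results):
    """results: list of 0/1 win indicators, oldest game first."""
    out = []
    for t in range(len(results)):
        best, wins = 0.0, 0
        for n in range(1, t + 2):          # look back over the last n games
            wins += results[t - n + 1]
            best = max(best, wins / n)
        out.append(best)
    return out

# Won the most recent game: take n = 1 and be 100% happy today.
print(happiness([1, 0, 0, 1, 1]))   # [1.0, 0.5, 0.333..., 1.0, 1.0]
```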

Funding boost for the arXiv

This is fantastic news: starting this January, the Simons Foundation will provide the Cornell University Library with up to US $300k per year (for the next five years) of matching funds to help ensure the continued sustainability of arXiv.org. The funds match donations from about 120 well-funded institutions in a dozen countries that are heavy downloaders of articles from the arXiv. It is also providing an unconditional gift of $50k per year. Here’s the press release from the CUL.
I think it is pretty remarkable how an institution like the arXiv, which every reader of this blog will agree is absolutely indispensable for research, has struggled to make ends meet. This is especially true given that the amount of money it takes to keep it going is really just a drop in the bucket compared to other spending. Look at some of the numbers: in the last year alone, there were more than 50 million downloads worldwide and more than 76,000 articles submitted. To have open access to that kind of information for a total cost of about $1m per year? Priceless.

Instruments for Natural Philosophy

Thomas B. Greenslade, emeritus physics professor at Kenyon College in Gambier, Ohio, USA, has amassed (both physically and virtually) a fascinating and expertly curated collection of old physics teaching equipment. Among my favorite items is Bernard H. Porter’s 1939 map depicting Physics as a continent, with rivers corresponding to its principal branches.

Physics World gets high on Tel Aviv catnip

It should be no surprise that loose talk by scientists on tantalizing subjects like backward causality can impair the judgment of science writers working on a short deadline. A recent paper by Aharonov, Cohen, Grossman and Elitzur at Tel Aviv University, provocatively titled “Can a Future Choice Affect a Past Measurement’s Outcome?” so intoxicated Philip Ball, writing in Physics World, that a casual reader of his piece would likely conclude that the answer was “Yes! But no one quite understands how it works, and it probably has something to do with free will.” A more sober reading of the original paper’s substantive content would be

  •  As John Bell showed in 1964, quantum systems’ behavior cannot be explained by local hidden variable models of the usual sort, wherein each particle carries information determining the result of any measurement that might be performed on it.
  • The Two-State-Vector Formalism (TSVF) for quantum mechanics, although equivalent in its predictions to ordinary nonlocal quantum theory, can be viewed as a more complicated kind of local hidden variable model, one that, by depending on a final as well as an initial condition, and being local in space-time rather than space, escapes Bell’s prohibition.

This incident illustrates two unfortunate tendencies in quantum foundations research:

  • Many in the field believe in their own formulation or interpretation of quantum mechanics so fervently that they give short shrift to other formulations, rather than treating them more charitably, as complementary approaches to understanding a simple but hard-to-intuit reality.
  • There is a long history of trying to fit the phenomenology of entanglement into inappropriate everyday language, like Einstein’s “spooky action at a distance”.

Surely the worst mistake of this sort was Herbert’s 1981 “FLASH” proposal to use entanglement for superluminal signaling, whose refutation may have hastened the discovery of the no-cloning theorem. The Tel Aviv authors would never make such a crude mistake—indeed, far from arguing that superluminal signaling is possible, they use it as a straw man for their formalism to demolish. But unfortunately, to make their straw man look stronger before demolishing him, they use language that, like Einstein’s, obscures the crucial difference between communication and correlation. They say that the initial (weak) measurement outcomes “anticipate the experimenter’s future choice” but that doing so causes no violation of causality because the “anticipation is encrypted”. This is as wrongheaded as saying that when Alice sends Bob a classical secret key, intending to use it for one-time-pad encryption, the key is already an encrypted anticipation of whatever message she might later send with it. Or to take a more quantum example, it’s like saying that half of any maximally entangled pair of qubits is already an encrypted anticipation of whatever quantum state might later be teleported through it.
Near the end of their paper the Tel Aviv group hits another hot button, coyly suggesting that encrypted anticipation may help explain free will, by giving humans “full freedom from both past and future constraints.” The issue of free will appeared also in Ball’s piece (following a brief but fair summary of my critique) as a quote attributing to Yakir Aharonov, the senior Tel Aviv author, the opinion that humans have free will even though God knows exactly what they will do.
The authors, and reviewer, would have served their readers better by eschewing the “concept” of encrypted anticipation and instead concentrating on how TSVF makes a local picture of quantum evolution possible. In particular they could have compared TSVF with another attempt to give orthodox quantum mechanics a fully local explanation, Deutsch and Hayden’s 1999 information flow formalism.

Programs announced for AQIS'12 and Satellite Workshop

The accepted papers and schedule for the twelfth Asian Quantum Information Science conference, in Suzhou, China, 23-26 August, have been announced. It will be followed by a satellite workshop on quantum Shannon theory, 27-28 August, in nearby Shanghai. Unfortunately we probably won’t be live-blogging these events, due to most of the pontiffs being elsewhere.

I have a friend in Minsk…

Unlike the usual university plagiarism policy, which is typically something bland copied from another university, the University of Bergen has explained the problem, and consequences, of plagiarism with Hollywood-level production values.

(Be sure to click on “CC” for the English subtitles.)
I came across the video from the blog of Andrew Gelman, who has long chronicled plagiarism and other forms of scientific misconduct. From his post, I also learned about my new favorite plagiarism story.

good news for quantum computing?

Once large-scale quantum computers exist and can break RSA, of course everyone will switch to a quantum-resistant cryptosystem. Sure, the work that classical computers do to encode and decode will be polynomially higher, but on the other hand, all of the code-breaking advantages of quantum computers will be nullified, and we’ll be stuck trying to explain to the NSA why they should fund simulations of benzene.
Or will we?
One thing we’ve learned from the (semi-)recent hack of stratfor.com (a “subscription-based provider of geopolitical analysis”) is that even though the importance of salts has been known since at least the days of the Enigma machine, people won’t necessarily follow best practices, or stop storing things like credit card numbers in the clear.
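For anyone wondering what “best practices” means here, a minimal sketch of salted password hashing looks like the following (real systems should use a deliberately slow KDF such as bcrypt, scrypt or PBKDF2; plain SHA-256 is only meant to show what the salt is for):

```python
import hashlib, os

# Store a random per-user salt plus a digest, never the password itself.
# The salt makes identical passwords hash differently and defeats
# precomputed rainbow tables.
def store_password(password: str):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt.hex(), digest            # these two values go in the database

def check_password(password: str, salt_hex: str, digest: str) -> bool:
    salt = bytes.fromhex(salt_hex)
    return hashlib.sha256(salt + password.encode()).hexdigest() == digest

salt, digest = store_password("hunter2")
print(check_password("hunter2", salt, digest))   # True
print(check_password("guess", salt, digest))     # False
```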
So don’t worry: even after “post-quantum” crypto has been developed, and large-scale quantum computers are common, there should be plenty of RSA around to crack.

Alexei Kitaev wins Fundamental Physics Prize

Alexei Kitaev was just named as a co-recipient of the inaugural Fundamental Physics Prize, along with 8 other distinguished physicists. This is a brand new prize which was started by a Russian billionaire named Yuri Milner. More at the New York Times. Alexei is credited in the citation as follows:

For the theoretical idea of implementing robust quantum memories and fault-tolerant quantum computation using topological quantum phases with anyons and unpaired Majorana modes.

This is without question a well-deserved award. I know I’m not the only one who has said, only half joking, that I work on “Kitaev Theory”. A hearty congratulations to Alexei.
There’s more to this story, though! Since the NYT highlights it, there is no dancing around the fact that this is the largest monetary prize in the history of physics: US $3 million! And with big money comes big controversy. From the NYT, my emphasis:

Unlike the Nobel in physics, the Fundamental Physics Prize can be awarded to scientists whose ideas have not yet been verified by experiments, which often occurs decades later. Sometimes a radical new idea “really deserves recognition right away because it expands our understanding of at least what is possible,” Mr. Milner said.
Dr. Arkani-Hamed, for example, has worked on theories about the origin of the Higgs boson, the particle recently discovered at the Large Hadron Collider in Switzerland, and about how that collider could discover new dimensions. None of his theories have been proven yet. He said several were “under strain” because of the new data.

Given that this is worth more than the Nobel prize (“only” $1.2 million, usually shared by 3 people) what sort of incentives does this set up? I’m glad that Mr. Milner’s award will go to researchers with breakthrough ideas, but sometimes great ideas don’t agree with Nature! They will have other value, for sure, but is it too risky to reward “radical ideas” over correct ideas? Mr. Milner, who made his fortune investing in internet companies and knows a thing or two about risk, apparently doesn’t think so.
Update: Given that 5 of the recipients were string theorists, it is unsurprising that Peter Woit got there first to add some fuel to the fire.

This post was supported by Goldman-Sachs Grant No. GS98039

After my earlier post about the defense budget, I thought it might be nice if there were some other similar-sized revenue streams that we could tap into besides DoD funding. It got me thinking… who has the most money? Governments aside (which already have schemes for funding science), it has to be large corporations and big investment banks.
While some large corporations have R & D divisions (e.g. the quantum group at IBM), I’m not aware of any investment bank that has one, despite the large number of physicists, mathematicians and computer scientists that they employ. Could we possibly get a bank to directly fund scientific research? After all, what is the entire NSF budget of $7 billion to a big investment bank? A JP Morgan executive loses that kind of money in the cushions of his couch.
Here is something that could possibly entice one of these entities to invest in physics: using neutrinos to do high-frequency trading. While all those other suckers are busy sending signals overland via satellites and fiber optics, you just take a short cut with a neutrino beam straight through the center of the earth!  My back-of-the-envelope calculation suggests an 18 ms difference to send a signal through the Earth from NYC to Shanghai rather than over the surface. You could make the trade and still have time to enjoy a quick blink afterward.
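Something like the following script gives the flavor of the calculation; the city coordinates and assumed signal speeds are my own rough choices, and the size of the neutrino head start depends heavily on what you assume about the conventional route (fiber vs. free space, ideal vs. real-world routing).

```python
import math

# Rough numbers only: coordinates, the fiber speed (2c/3), and the comparison
# of a great-circle fiber route to a straight chord are all assumptions.
R = 6371.0                                         # Earth radius, km
c = 299792.458                                     # speed of light, km/s
nyc, shanghai = (40.71, -74.01), (31.23, 121.47)   # (lat, lon) in degrees

def central_angle(p, q):
    (la1, lo1), (la2, lo2) = (tuple(map(math.radians, x)) for x in (p, q))
    return math.acos(math.sin(la1) * math.sin(la2) +
                     math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))

theta = central_angle(nyc, shanghai)
surface = R * theta                  # along the surface (great circle)
chord = 2 * R * math.sin(theta / 2)  # straight line through the Earth

t_fiber = surface / (2 * c / 3)      # light in fiber: roughly 2c/3, ideal routing
t_neutrino = chord / c               # neutrinos: essentially c, no detours
print(f"surface {surface:.0f} km, chord {chord:.0f} km, "
      f"head start {(t_fiber - t_neutrino) * 1e3:.0f} ms")
```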
In fact, a group of physicists at Fermilab has recently done an experiment (arXiv) that demonstrated using a neutrino beam to (classically 🙂 ) communicate through the Earth. The bit rate was low, only 0.1 bits per second, and the distance was only 240 m. I’m sure one of the milestones on their Goldman-Sachs grant is to get that up to 1 bps and 1 km before the program review.