Pop goes the discord bubble

Well, the rat is out of the bag; Schroedinger’s Rat, that is. That’s the new quantum blog by Miguel Navascues, and boy, is it snarky! I was keeping it under my hat for a while so I could enjoy it privately, but the time has come to announce it to the world. Miguel is fearless about shouting his colorful opinions from the rooftops, and I can respect that, even if I don’t agree with everything he says.
Miguel’s second post is all about quantum discord. As anyone who reads quant-ph knows, there have been two or three papers per week about discord for years now. Unfortunately, a huge number of these are nearly worthless! Quoth the Rat:

The quantum discord of a bipartite state was first defined by Ollivier and Zurek as the difference between its original quantum mutual information and the same quantity after we perform a rank-1 projective measurement on one part.
Now, what does that mean? Probably, nothing. But lack of motivation has never prevented investigation at international scale. And so we ended up with one more research topic that clearly goes nowhere, in the line of entanglement sudden death…
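In symbols: writing $I(\rho_{AB}) = S(\rho_A) + S(\rho_B) - S(\rho_{AB})$ for the quantum mutual information, the quoted definition (transcribed from Ollivier and Zurek, with the measurement chosen to be least disturbing) reads

$$D(B|A) \;=\; I(\rho_{AB}) \;-\; \max_{\{\Pi_k\}} I\Bigl(\textstyle\sum_k (\Pi_k \otimes I)\,\rho_{AB}\,(\Pi_k \otimes I)\Bigr),$$

where the maximum runs over rank-1 projective measurements $\{\Pi_k\}$ on subsystem $A$. Discord vanishes exactly when some such measurement leaves the state undisturbed; entangled states always have positive discord, but so do many separable states, which accounts for much of the fascination.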

If you thought he was exaggerating, here is a paper that has been cited 263 times since May 2009: Robustness of quantum discord to sudden death. Wow, discord is immortal! And if you want more, just go to quantumdiscord.org and browse the list of discord papers for yourself.
Before we can treat the patient we have to understand the disease, and this is exemplified by a typical test case: Quantum discord for two-qubit X-states (cited 256 times). The authors compute the quantum discord and a few other entanglement measures for a family of two-qubit states and conclude that there is no obvious relationship between the various measures. Why did they do this? “Because it’s there” might have been a good reason to scale Everest, but this feels more like a homework assignment for a graduate course. Wait, I take that back: Scott’s students’ homework is much more interesting and relevant.
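(For reference, an “X-state” is simply a two-qubit state whose density matrix, in the standard product basis, has nonzero entries only on the diagonal and anti-diagonal, so that it looks like the letter X:

$$\rho_X = \begin{pmatrix} \rho_{11} & 0 & 0 & \rho_{14} \\ 0 & \rho_{22} & \rho_{23} & 0 \\ 0 & \rho_{32} & \rho_{33} & 0 \\ \rho_{41} & 0 & 0 & \rho_{44} \end{pmatrix}.$$

The name refers to nothing deeper than this sparsity pattern.)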
But Steve! What about all the good discord papers? You can’t just trash the entire field! 
You’re absolutely right, there are good discord papers. I can even name about 5 of them, and I’m willing to bet there are as many as 12 or 15 total. My intention is definitely not to trash the subject as intrinsically uninteresting; rather, I want to highlight the epidemic of pointless papers that constitute the discord bubble. I hope that thinning the herd will increase the quality of the results in the field and decrease the hype surrounding it, because it has really gotten completely out of control.
Here are some good rules of thumb for those moments when you find yourself writing a discord paper.

  1. If you are calculating something and you don’t know why you are calculating it, then close your LaTeX editor. You do not have one of the good discord papers.
  2. If the discord you calculated is not related to a resource (physical, computational, etc.) in a quantifiable way, you probably don’t have a good discord paper.
  3. If it is related to a resource, but you had to concoct that relationship in a totally ad hoc way that doesn’t generalize, then you do not have a good discord paper. Ditto if the relationship is via a protocol that literally nobody cares about.
  4. If you only have two qubits, you almost certainly don’t have a good discord paper. Good theory papers usually have n qubits.
  5. And if your weak theory result suffers from one or more of the above, but you add some equally unimpressive experimental results to cover up that fact, then you absolutely, unequivocally do not have a good discord paper. I don’t care what journal it’s published in: it is not a good paper, and you should be ashamed of yourself for inflating the bubble even further.
As bad as the authors are, this bubble is also the fault of the referees. Simply being correct is not enough to warrant publication: a paper also has to be new, non-trivial, and interesting. Please, referees, “Just Say No” to papers that don’t meet this standard!
Enough of this bad medicine. In the comments, feel free to mention some of the actually good discord papers, and why they deserve to keep their value after the bubble bursts.

Science Code Manifesto

Recently, one of the students here at U. Sydney and I had the frustrating experience of trying to reproduce a numerical result from a paper, but it just wasn’t working. The code used by the authors was regrettably not made publicly available, so once we were fairly sure that our code was correct, we didn’t know how to resolve the discrepancy. Luckily, in our small community, I knew the authors personally and we were able to figure out why the results didn’t match up. But as code becomes a larger and larger part of scientific projects, these sorts of problems will increase in frequency and severity.
What can we do about it?
A team of very smart computer scientists has come together and written the Science Code Manifesto. It is short and sweet; the whole thing boils down to five simple principles of publishing code:

Code
All source code written specifically to process data for a published paper must be available to the reviewers and readers of the paper.
Copyright
The copyright ownership and license of any released source code must be clearly stated.
Citation
Researchers who use or adapt science source code in their research must credit the code’s creators in resulting publications.
Credit
Software contributions must be included in systems of scientific assessment, credit, and recognition.
Curation
Source code must remain available, linked to related materials, for the useful lifetime of the publication.
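To make the first three principles concrete, here is the sort of header one might put at the top of an analysis script. Everything in it is hypothetical: the file name, the author, the license choice, and the citation are all invented for the example.

```python
# analyze.py -- regenerates Figure 2 of the (hypothetical) paper
#   A. Researcher and B. Colleague, J. Imaginary Phys. 1, 23 (2011).
#
# Code: this file is distributed and archived alongside the paper.
# Copyright: (c) 2011 A. Researcher <a.researcher@example.edu>,
#   released under the MIT License (see the LICENSE file).
# Citation: if you use or adapt this code, please cite the paper above.

import numpy as np

def run(seed=2011, n_samples=10**5):
    """Toy stand-in for the paper's actual computation; the fixed seed
    makes the published numbers exactly reproducible."""
    rng = np.random.RandomState(seed)
    return rng.standard_normal(n_samples).mean()

if __name__ == "__main__":
    print(run())
```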

If you support this, and you want to help contribute to the solution, then please go and endorse the manifesto. Even more importantly, practice the five C’s the next time you publish a paper!

Plagiarism horror story

Halloween is my favorite holiday: you aren’t strong-armed into unhealthy levels of conspicuous consumption, the costumes and pumpkins are creative and fun, the autumn colors are fantastic, and the weather is typically quite pleasant (or at least it was in the days before climate change and Hurricane Sandy). You don’t even have to travel at all! So in honor of Halloween, I’m going to tell a (true) horror story about…

Back in August, I was asked to referee a paper for a certain prestigious physics journal. The paper had already been reviewed by two referees, and while one referee was fairly clear that the paper should not be published, the other gave a rather weak rejection. The authors replied seeking the opinion of a third referee, and that’s when the editors contacted me.
I immediately noticed that something was amiss: the title of the paper was nearly identical to a paper that my co-authors and I had published in that same journal a couple of years earlier. In fact, of the 12 words in the title, the first 9 were taken verbatim. I’m sorry to say it, but it further raised my hackles that the authors and their universities were unknown to me and came from a country with a reputation for rampant plagiarism. Proceeding to the abstract, I found that the authors had copied entire sentences, merely substituting some of the nouns and verbs as if it were a Mad Lib. Scrolling further, I found that they had copied an entire theorem, reproducing the equations in the proof symbol by symbol and line by line!
I told all of this to the editor, who of course rejected the paper, also providing an explanation of why, and of what constitutes plagiarism. A strange twist is that my original paper was actually cited by the copy. Perhaps the authors thought that citing my paper meant the copying wasn’t plagiarism? They had even mentioned my paper directly in their response to the original reports as supporting evidence that their paper should be published. (“You published this other paper which is nearly identical, so why wouldn’t you publish ours?”) At that point I was thinking that perhaps they simply didn’t understand that their actions constituted plagiarism, and I was grateful that the editor had enlightened them.
Fast forward to today.
I receive another email from a different journal asking me to referee a paper… the same paper. The title has been changed, but the abstract and the copied theorem are still there. Bizarrely, they have even added a fourth author. The zombie paper is back, and it wants to be published!
Of course, I can raise my previous objections again and re-kill this zombie paper, and I’m considering contacting the authors directly. This clearly isn’t a scalable strategy, however.
It got me thinking: is there a better way to combat plagiarism of academic papers? One thing that often works in changing people’s behavior is shame. Perhaps if we built a website where we publicly post the names and affiliations of offenders, the embarrassment would be enough to stem the tide. Sort of like the P vs. NP site for erroneous proofs.
What’s your best idea for how to deal with this problem?

Greg Kuperberg: a paladin fighting against the ogres of hype

Nothing much to add here, but Greg Kuperberg has an excellent article at Slate which clarifies the power and limitations of quantum computers. The article is brief, accessible, and highly accurate. The next time a science journalist contacts you for a story, be sure to pass on a copy of this article as an exemplar of accurate, non-technical descriptions of quantum computing.

Is science on trial in Italy?

[Photo credit: Reuters/Alessandro Bianchi]

Big news from Italy today, where a regional court has ruled that six Italian scientists (and one ex-government official) are guilty of multiple manslaughter in connection with the deaths of 309 people in the 2009 L’Aquila earthquake.
The reaction in the English-speaking press has largely showcased the angle that the scientists are being persecuted for failing to predict when the earthquake would hit. These reports rightly point out that there is no currently accepted scientific method for short-term earthquake prediction, and hence no way to fault the scientists for failing to make an accurate prediction. As the BBC puts it, “The case has alarmed many in the scientific community, who feel science itself has been put on trial.”
And indeed, reading through the technical report of the “grandi rischi” (great risks) commission, there does not seem to be anything unreasonable in what these scientists said, either before or after the earthquake. (Unfortunately the reports are only in Italian… but it is not too difficult, because this helps.) There is no evidence here of either misconduct or manipulation of data.
However, this is a rather delicate issue, and the above arguments in defense of the scientists may be red herrings. As BBC science correspondent Jonathan Amos reports, the issue under deliberation at the trial was rather whether the scientists (under pressure from the Civil Defense) issued public statements that unduly downplayed the risk. In fact, one official, Guido Bertolaso, was recorded in a tapped telephone conversation explicitly calling for such action, and I’m sure that charges will be brought against him as well, if they haven’t been already. (Strangely, the wiretap was part of a separate investigation and went unnoticed until January of this year, hence the delay.)
In fact, after the aforementioned conversation with Mr. Bertolaso, one of the seven defendants, Mr. de Bernardinis (the ex-official, not one of the scientists), told a reporter that there was “no danger” posed by the ongoing tremors, that “the scientific community continues to confirm to me that in fact it is a favorable situation,” and that the public should just “relax with a Montepulciano” (a red wine from the region). Contrast this with the fact that strong earthquakes do tend to correlate in time with an increase in smaller tremors: although the total probability of a large event remains low, it definitely increases when there are more tremors.
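To see why this matters, here is an illustration with deliberately made-up numbers: if the chance of a major quake on any given day were 1 in 100,000, and a tremor swarm raised that a hundredfold, the daily probability would still be only 1 in 1,000. In that situation, “a large event remains unlikely” is technically true, while “there is no danger” plainly is not.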
Thus, the case is not just another entry in the long Italian tradition of show trials persecuting scientists (cf. Bruno, Galileo). It is at the very least a complex and delicate case, and we should resist the knee-jerk reaction to rush to the defense of our fellow scientists without first getting all the facts. Personally, I’m reserving judgment on the guilt or innocence of the scientists until I have more information, though Mr. de Bernardinis is not looking so good.
(Update: as Aram rightly points out in the comments, a manslaughter charge seems very excessive here, and I suppose charges of negligence or maybe wrongful death would seem more appropriate.)
But there is at least one other tragedy here, and that is that these scientists might be essentially the only ones who face a trial. There are many other failure points in the chain of responsibility that led to the tragic deaths. For example, it has come to light that many of the buildings were not built according to earthquake safety regulations; the contractors and government officials were cutting corners in very dangerous ways. If those accusations are true, then that is very serious indeed, and it would be a travesty of justice if the guilty parties were to go unpunished.
Update: Michael Nielsen points to an outstanding article that I missed (from over a month ago!) that discusses exactly these points. Let me quote extensively from the article:

Picuti [one of the prosecutors] made it clear that the scientists are not accused of failing to predict the earthquake. “Even six-year old kids know that earthquakes cannot be predicted,” he said. “The goal of the meeting was very different: the scientists were supposed to evaluate whether the seismic sequence could be considered a precursor event, to assess what damages had already happened at that point, to discuss how to mitigate risks.” Picuti said the panel members did not fulfill these commitments, and that their risk analysis was “flawed, inadequate, negligent and deceptive”, resulting in wrong information being given to citizens.
Picuti also rejected the point – made by the scientists’ lawyers – that De Bernardinis alone should be held responsible for what he told the press. He said that the seismologists failed to give De Bernardinis essential information about earthquake risk. For example, he noted that in 1995 one of the indicted scientists… had published a study that suggested a magnitude-5.9 earthquake in the L’Aquila area was considered highly probable within 20 years… [and] estimated the probability of a magnitude-5.5 shock in the following decade to be as high as 15%. Such data were not discussed at the meeting, as the minutes show.
“Had Civil Protection officials known this, they would probably have acted differently,” said Picuti. “They were victims of the seismologists”.

Sean Barrett


I am very sad to learn that Sean Barrett of Imperial College London, who made important contributions to fault tolerance and optical quantum computing, among other areas, was tragically killed in a traffic accident in Perth on Friday. He was just 36 years old.
Sean had a gift for working at the boundary where theory meets experiment, and many of his theoretical contributions over the past decade have led to experimental progress. Sean’s presence was felt especially strongly here in Australia, where he spent several years as a researcher. He was not only a valued member of our research community but was also a personal friend to many of us, and he will be sorely missed.

12 things a quantum information theorist should do at least once

By popular demand

  1. prove (or disprove) something by going to the Church of the Larger Hilbert Space
  2. apply amplitude amplification in a non-trivial way
  3. convince yourself you’ve proven that NP is contained in BQP, or at least that you have a poly-time quantum algorithm for graph isomorphism or dihedral HSP
  4. upper- or lower-bound a fault-tolerance threshold
  5. use the stabilizer formalism
  6. make use of convexity
  7. pick a random state or unitary from the Haar measure (see the code sketch after this list)
  8. use an entropic quantity
  9. estimate or compute a spectral gap
  10. impress people in field X with your knowledge of something that everyone in field Y takes for granted, where X and Y are chosen from {CS, physics, some area of math, etc.}.
  11. confuse people in field X about the point of what you’re doing, when it’s a common goal in field Y.
  12. have a paper unjustly rejected or accepted by PRL.
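Since item 7 is probably the most common of these in practice, here is a minimal sketch of one standard way to do it, in Python with NumPy: QR-decompose a Ginibre matrix and fix the phases, as described by Mezzadri in arXiv:math-ph/0609050.

```python
import numpy as np

def haar_random_unitary(n, rng=None):
    """Sample an n x n unitary from the Haar measure on U(n)."""
    rng = np.random.default_rng() if rng is None else rng
    # Ginibre matrix: i.i.d. standard complex Gaussian entries.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # QR alone is not Haar-distributed: remove the phase ambiguity by
    # making the diagonal of R real and positive.
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # multiplies column k of q by the k-th phase

# A Haar-random pure state is just the first column of a Haar-random
# unitary (equivalently, a normalized vector of i.i.d. complex Gaussians).
psi = haar_random_unitary(4)[:, 0]
```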

Thanks to Ashley Montanaro for suggesting the first three.

Down Under

I have just moved to the University of Sydney to begin a permanent position here in the Department of Physics. I had a great time at the University of Washington, and I’ll miss working with the fantastic people there. I am looking forward, however, to contributing to the growth of an increasingly strong quantum group here, together with my new colleagues.
Wish me luck!
Also, a bit of general advice. If you want to submit things to QIP, it is generally not a good idea to schedule an international move for the same week as the submission deadline. 🙂
Finally, here are some photos to make you all jealous and to encourage you to visit.

Uncertain on Uncertainty

Over at BBC News, there is an article about a recently published paper (arXiv) by Lee Rozema et al. that could lead to some, ehm, uncertainty about the status of the Heisenberg Uncertainty Principle (HUP).
Before dissecting the BBC article, let’s look at the paper by Rozema et al. The title is “Violation of Heisenberg’s Measurement–Disturbance Relationship by Weak Measurements”. While this title might raise a few eyebrows, the authors make it crystal clear in the opening sentence of the abstract that they didn’t disprove the HUP or some such nonsense. The HUP is a theorem within the standard formulation of quantum mechanics, so finding a violation of that would be equivalent to finding a violation of quantum theory itself! Instead, they look at the so-called measurement–disturbance relationship (MDR), which is a non-rigorous heuristic that is commonly taught to give an intuition for the uncertainty principle.
The HUP is usually stated in the form of the Robertson uncertainty relation, which says that a given quantum state psi cannot (in general) have zero variance with respect to each of two non-commuting observables. The more modern formulations are stated in a way that is independent of the quantum state; see this nice review by Wehner and Winter for more about these entropic uncertainty relations.
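For the record, the Robertson relation bounds the product of the standard deviations of two observables $A$ and $B$ in the state $|\psi\rangle$:

$$\sigma_A\,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle\psi|[A,B]|\psi\rangle\bigr|,$$

which for position and momentum gives the familiar $\sigma_x \sigma_p \ge \hbar/2$. Note that both deviations are statistics of the state itself; nothing in this statement refers to the precision or disturbance of any measurement.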
By contrast, the MDR states that the product of the measurement precision and the measurement disturbance (each quantified as a root-mean-squared deviation between ideal and actual measurement variables) can’t be smaller than ħ/2. In 2002, Masanao Ozawa proved that this is inconsistent with standard quantum mechanics, and formulated a corrected version of the MDR that also takes into account the state-dependent variances of the observables. Building on Ozawa’s work, in 2010 Lund and Wiseman proposed an experiment which could measure the relevant quantities using the so-called weak value.
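Schematically, if $\epsilon(A)$ denotes the rms error of an $A$ measurement and $\eta(B)$ the rms disturbance it imparts to $B$, the naive MDR asserts $\epsilon(A)\,\eta(B) \ge \tfrac{1}{2}|\langle[A,B]\rangle|$. Ozawa showed that what quantum mechanics actually guarantees is

$$\epsilon(A)\,\eta(B) + \epsilon(A)\,\sigma_B + \sigma_A\,\eta(B) \;\ge\; \tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr|,$$

whose two extra state-dependent terms are precisely what allow the product $\epsilon(A)\,\eta(B)$ by itself to dip below the naive bound.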
Rozema et al. implemented the Lund-Wiseman scheme using measurements of complementary observables (X and Z) on the polarization states of a single photon to confirm Ozawa’s result, and to experimentally violate the MDR.  The experiment is very cool, since it crucially relies on entanglement induced between the probe photon and the measurement apparatus.
The bottom line: the uncertainty principle emerges completely unscathed, but the original hand-wavy MDR succumbs to both theoretical and now experimental violations.
Now let’s look at the BBC article. Right from the title and the subtitle, they get it wrong. “Heisenberg uncertainty principle stressed in new test”—no, that’s wrong. “Pioneering experiments have cast doubt on a founding idea…”—also no. The results were consistent with the HUP, and actually corroborated Ozawa’s theory of measurement–disturbance! Then they go on to say that this “could play havoc with ‘uncrackable codes’ of quantum cryptography.” The rest of the article has a few more whoppers, but also some mildly redeeming features; after such a horrible start, though, you might as well quietly leave the pitch. Please, science journalists, try to do better next time.

Taken to School

Here is a fine piece of investigative journalism about a very widespread scam that is plaguing academia. Definitely worth a watch.