Nothing much to add here, but Greg Kuperberg has an excellent article at Slate which clarifies the power and limitations of quantum computers. The article is brief, accessible, and highly accurate. The next time a science journalist contacts you for a story, be sure to pass on a copy of this article as an exemplar of accurate, non-technical descriptions of quantum computing.
Is science on trial in Italy?
Big news from Italy today, where a regional court has ruled that six Italian scientists (and one ex-government official) are guilty of multiple manslaughter over the deaths of 309 people killed in the L’Aquila earthquake in 2009.
The reaction in the English-speaking press has largely played up the angle that the scientists are being persecuted for failing to predict exactly when the earthquake would hit. Commentators rightly point out that there is no currently accepted scientific method for short-term earthquake prediction, and hence no way to fault the scientists for failing to make an accurate prediction. As the BBC puts it, “The case has alarmed many in the scientific community, who feel science itself has been put on trial.”
And indeed, reading through the technical report of the “grandi rischi” (major risks) commission, there does not seem to be anything unreasonable in what these scientists said, either before or after the earthquake. (Unfortunately the reports are only in Italian… but they are not too difficult to follow, because this helps.) There is no evidence here of either misconduct or manipulation of data.
However, this is a rather delicate issue, and the above arguments in defense of the scientists may be red herrings. As BBC science correspondent Jonathan Amos reports, the issue under deliberation at the trial was instead whether the scientists (under pressure from the Civil Defense) issued public statements that unduly downplayed the risk. In fact, one official, Guido Bertolaso, was recorded in a tapped telephone conversation explicitly calling for such action, and I’m sure that charges will be brought against him as well, if they haven’t already. (Strangely, the wiretap was part of a separate investigation and went unnoticed until January of this year, hence the delay.)
In fact, after the aforementioned conversation with Mr. Bertolaso, one of the seven defendants, Mr. de Bernardinis (the ex-official, not one of the scientists), told a reporter that there was “no danger” posed by the ongoing tremors, that “the scientific community continues to confirm to me that in fact it is a favorable situation,” and that the public should just “relax with a Montepulciano” (a glass of red wine from the region). Contrast this with the fact that strong earthquakes do tend to be correlated in time with an increase in smaller tremors: although the total probability of a large event remains low, it does increase when there are more tremors.
Thus, the case is not just another in the long Italian tradition of show trials persecuting scientists (cf. Bruno, Galileo). It is at the very least a complex and delicate case, and we should resist the knee-jerk reaction of rushing to the defense of our fellow scientists before getting all of the facts. Personally, I’m reserving judgement on the guilt or innocence of the scientists until I have more information, though Mr. de Bernardinis is not looking so good.
(Update: as Aram rightly points out in the comments, a manslaughter charge seems very excessive here, and I suppose charges of negligence or maybe wrongful death would seem more appropriate.)
But there is at least one other tragedy here, and that is that these scientists might be essentially the only ones who face a trial. There are many other failure points in the chain of responsibility that led to the tragic deaths. For example, it has come to light that many of the buildings were not built according to earthquake safety regulations; the contractors and government officials were cutting corners in very dangerous ways. If those accusations are true, then that is very serious indeed, and it would be a travesty of justice if the guilty parties were to go unpunished.
Update: Michael Nielsen points to an outstanding article that I missed (from over a month ago!) that discusses exactly these points. Let me quote extensively from the article:
Picuti [one of the prosecutors] made it clear that the scientists are not accused of failing to predict the earthquake. “Even six-year old kids know that earthquakes cannot be predicted,” he said. “The goal of the meeting was very different: the scientists were supposed to evaluate whether the seismic sequence could be considered a precursor event, to assess what damages had already happened at that point, to discuss how to mitigate risks.” Picuti said the panel members did not fulfill these commitments, and that their risk analysis was “flawed, inadequate, negligent and deceptive”, resulting in wrong information being given to citizens.
Picuti also rejected the point – made by the scientists’ lawyers – that De Bernardinis alone should be held responsible for what he told the press. He said that the seismologists failed to give De Bernardinis essential information about earthquake risk. For example, he noted that in 1995 one of the indicted scientists… had published a study that suggested a magnitude-5.9 earthquake in the L’Aquila area was considered highly probable within 20 years… [and] estimated the probability of a magnitude 5.5 shock in the following decade to be as high as 15%. Such data were not discussed at the meeting, as the minutes show.
“Had Civil Protection officials known this, they would probably have acted differently,” said Picuti. “They were victims of the seismologists”.
Sean Barrett
I am very sad to learn that Sean Barrett of Imperial College London, who made important contributions to fault tolerance and optical quantum computing, among other areas, was tragically killed in a traffic accident in Perth on Friday. He was just 36 years old.
Sean had a gift for working at the boundary where theory meets experiment, and many of his theoretical contributions over the past decade have led to experimental progress. Sean’s presence was felt especially strongly here in Australia, where he spent several years as a researcher. He was not only a valued member of our research community but was also a personal friend to many of us, and he will be sorely missed.
Haroche and Wineland win Physics Nobel
The physics prize was shared between experimentalists Serge Haroche and David Wineland, longtime leaders in the study of atom–photon interaction. In recent decades both have honed their techniques to meet the challenges and opportunities opened by “quantum information science,” which aims to rebuild the theory and practice of communication and computation on quantum foundations. This change of viewpoint was led by theorists, beginning with John Bell, and was initially regarded skeptically not only by information theorists and computer scientists, on whose turf it encroached, but even by many physicists, who saw a lot of theorizing, verging on philosophy, with little practice to back it up. Haroche, working often with Rydberg atoms and microwave cavities, and Wineland, with trapped ions and optical fields, took the new approach seriously, and over many years have provided much of the solid foundation of practice that has by now earned the field the right to be taken seriously.

At the same time, both researchers have done their part to restrain the inevitable hype. A decade and a half ago Haroche, in articles like “Quantum Computing: Dream or Nightmare?”, pointed out how difficult building a quantum computer would be, while always believing it possible in principle, and in the meantime produced, with his group, an impressive stream of experimental results and technical improvements that made it ever more practical. In the same vein, Wineland, when asked if ion traps were the right hardware for building a quantum computer, answered that whatever advantage they had was like being 10 feet ahead at the start of a 10-mile race. Then, like Haroche, he went ahead making steady progress in the control and measurement of individual particles, with applications quite apart from that distant goal.
Both men are consummate experimentalists, finding and adapting whatever it takes. I visited Wineland’s lab about a decade ago and noticed a common dishwashing glove (right-handed and light blue, as I recall) interposed between the ion trap’s optical window and a CCD camera focused on the ions within. I asked David what its function was among all the more professional-looking equipment. He said this particular brand of glove happened to be quite opaque, with a matte black interior as good as anything he could get from an optics catalog, while combining moderate flexibility with sufficient rigidity to stay out of the way of the light path, unlike, say, a piece of black velvet. Indeed the cut-off thumb fitted nicely onto the optical window, and the wrist was snugly belted around the front of the camera, leaving the fingers harmlessly but ludicrously poking out at the side.

The physics Nobel has occasioned a lot of press coverage, much of it quite good at conveying the excitement of quantum information science while restraining unrealistic expectations. We especially like Jason Palmer’s story from earlier this year, which the BBC resurrected to explain a field that this Nobel has suddenly thrust into the limelight. We congratulate Haroche and Wineland as deserving and timely winners of this first Nobel given to people who could fairly be described, and would now describe themselves, as quantum information scientists.
12 things a quantum information theorist should do at least once
By popular demand…
- prove (or disprove) something by going to the Church of the Larger Hilbert Space
- apply amplitude amplification in a non-trivial way
- convince yourself you’ve proven that NP is contained in BQP, or at least that you have a poly-time quantum algorithm for graph isomorphism or dihedral HSP
- upper- or lower-bound a fault-tolerance threshold
- use the stabilizer formalism
- make use of convexity
- pick a random state or unitary from the Haar measure (see the sketch after this list)
- use an entropic quantity
- estimate or compute a spectral gap
- impress people in field X with your knowledge of something that everyone in field Y takes for granted, where X and Y are chosen from {CS, physics, some area of math, etc.}.
- confuse people in field X about the point of what you’re doing, when it’s a common goal in field Y.
- have a paper unjustly rejected or accepted by PRL.
Thanks to Ashley Montanaro for suggesting the first three.
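Several of these are concrete enough to code up. As one example for the Haar-measure item, here is a minimal numpy sketch (the dimension and function name are just illustrative choices) that draws a Haar-random unitary by QR-decomposing a complex Gaussian matrix and fixing the phases of the diagonal of R; the first column of such a unitary is then a Haar-random pure state.

```python
import numpy as np

def haar_random_unitary(dim, rng=None):
    """Draw a dim x dim unitary from the Haar measure via the QR trick."""
    rng = rng or np.random.default_rng()
    # Complex Ginibre matrix: i.i.d. standard complex Gaussian entries.
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Remove the phase ambiguity of the QR decomposition so q is Haar-distributed.
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases  # scales column j of q by phases[j]

U = haar_random_unitary(4)
psi = U[:, 0]                                  # a Haar-random pure state
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
```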
A way around Nobel's 3-person limit
Most fields of science have become increasingly collaborative over the last century, sometimes forcing the Nobel Prizes to unduly truncate the list of recipients, or neglect major discoveries involving more than three discoverers. In January we pointed out a possible escape from this predicament: choose three official laureates at random from a larger list, then publish the entire list, along with the fact that the official winners had been chosen randomly from it. The money of course would go to the three official winners, but public awareness that they were no more worthy than the others might induce them to share it. A further refinement would be to use (and perhaps publish) weighted probabilities, allowing credit to be allocated unequally. If the Nobel Foundation’s lawyers could successfully argue that such randomization was consistent with Nobel’s will, the Prizes would better reflect the collaborative nature of modern science, at the same time lessening unproductive competition among scientists to make it into the top three.
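Purely as an illustration of the proposal, here is what the selection rule might look like in a few lines of Python; the contributor names and credit weights below are made up, and numpy’s weighted sampling without replacement stands in for whatever procedure the Foundation might actually adopt.

```python
import numpy as np

# Hypothetical contributor list with unequal credit weights summing to 1.
contributors = ["Alice", "Bob", "Carol", "Dave", "Eve"]
weights = np.array([0.35, 0.25, 0.20, 0.12, 0.08])

rng = np.random.default_rng()
# Draw the three "official" laureates without replacement, with probability
# proportional to the published credit weights.
official = rng.choice(contributors, size=3, replace=False, p=weights)

print("Official laureates:", list(official))
print("Published list with weights:", list(zip(contributors, weights)))
```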
Throwing cold water on the Quantum Internet
There has been a lot of loose talk lately about a coming “Quantum Internet”. I was asked about it recently by a journalist and gave him this curmudgeonly answer, hoping to redirect some of the naive enthusiasm:
…First let me remark that “quantum internet” has become a bit of a buzzword that can lead to an inaccurate idea of the likely role of quantum information in a future global information infrastructure. Although quantum concepts such as qubit and entanglement have revolutionized our understanding of the nature of information, I believe that the Internet will remain almost entirely classical, with quantum communication and computation being used for a few special purposes, where the unique capabilities of quantum information are needed. For other tasks, where coherent control of the quantum state is not needed, classical processing will suffice and will remain cheaper, faster, and more reliable for the foreseeable future. Of course there is a chance that quantum computers and communications links may some day become so easy to build that they are widely used for general purpose computing and communication, but I think it highly unlikely.
Would the quantum internet replace the classical one, or would the two somehow coexist?
As remarked above, I think the two would coexist, with the Internet remaining mostly classical. Quantum communication will be used for special purposes, like sharing cryptographic keys, and quantum computing will be used in those few situations where it gives a significant speed advantage (factoring large numbers, some kinds of search, and the simulation of quantum systems), or for the processing of inherently quantum signals (say from a physics experiment).
Would quantum search engines require qubits transmitted between the user’s computer and the web searcher’s host? Or would they simply use a quantum computer performing the search on the host machine, which could then return its findings classically?
It’s not clear that quantum techniques would help search engines, either in transmitting the data to the search engine or in performing the search itself. Grover’s algorithm (where coherent quantum searching gives a quadratic speedup over classical searching) is less applicable to the large physical databases on which search engines operate than to problems like the traveling salesman problem, where the search takes place not over a physical database but over an exponentially large space of virtual possibilities determined by a small amount of physical data.
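To put a rough number on that quadratic speedup: Grover search over an unstructured space of size N takes about (π/4)√N quantum queries, versus about N/2 classical queries on average. The space size in this back-of-the-envelope sketch is just an arbitrary example.

```python
import math

N = 10**12                                     # illustrative search-space size
classical_queries = N / 2                      # expected classical queries
grover_queries = (math.pi / 4) * math.sqrt(N)  # optimal number of Grover iterations

print(f"classical ~ {classical_queries:.2e} queries")  # ~ 5.00e+11
print(f"Grover    ~ {grover_queries:.2e} queries")     # ~ 7.85e+05
```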
On the other hand, quantum techniques could play an important supporting role, not only for search engines but other Internet applications, by helping authenticate and encrypt classical communications, thereby making the Internet more secure. And as I said earlier, dedicated quantum computers could be used for certain classically-hard problems like factoring, searches over virtual spaces, simulating quantum systems, and processing quantum data.
When we talk about quantum channels do we mean a quantum communication link down which qubits can be sent and which prevents them decohering, or are these channels always an entangled link? …
A quantum channel of the sort you describe is needed, both to transmit quantum signals and to share entanglement. Once entanglement has been shared, it can, given a quantum memory, be stored and used later in combination with a classical channel to transmit qubits. This technique is called quantum teleportation (despite this name, for which I am to blame, quantum teleportation cannot be used for transporting material objects).
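For readers who want to see the bookkeeping behind that answer, here is a minimal numpy simulation of one round of teleportation; the input amplitudes and qubit ordering are purely illustrative. One shared Bell pair plus two classical bits suffice to move an unknown qubit from Alice to Bob.

```python
import numpy as np

# Single-qubit basis states and gates.
zero = np.array([1, 0], dtype=complex)
one  = np.array([0, 1], dtype=complex)
I2 = np.eye(2, dtype=complex)
H  = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
Z  = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([1.0 + 0j])
    for op in ops:
        out = np.kron(out, op)
    return out

# Qubit 0: the unknown state Alice wants to send (amplitudes chosen arbitrarily).
psi = np.array([0.6, 0.8j], dtype=complex)
# Qubits 1 and 2: a Bell pair shared by Alice (1) and Bob (2).
bell = (kron(zero, zero) + kron(one, one)) / np.sqrt(2)
state = np.kron(psi, bell)  # 3-qubit state in the basis |q0 q1 q2>

# Alice: CNOT with control 0 and target 1, then Hadamard on qubit 0.
cnot01 = kron(np.outer(zero, zero), I2, I2) + kron(np.outer(one, one), X, I2)
state = kron(H, I2, I2) @ (cnot01 @ state)

# Alice measures qubits 0 and 1; the outcome m encodes her two classical bits.
probs = np.array([np.linalg.norm(state[2 * m: 2 * m + 2]) ** 2 for m in range(4)])
m = np.random.default_rng().choice(4, p=probs)
m0, m1 = m >> 1, m & 1

# Bob's post-measurement qubit, then the Pauli correction Z^m0 X^m1.
bob = state[2 * m: 2 * m + 2] / np.sqrt(probs[m])
bob = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob

print(abs(np.vdot(psi, bob)))  # ~1.0: Bob now holds the original state
```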
But could we ever hope for quantum communication in which no wires are needed – but entanglement handles everything?
The most common misconception about entanglement is that it can be used to communicate—transmit information from a sender to a receiver—perhaps even instantaneously. In fact it cannot communicate at all, except when assisted by a classical or quantum channel, neither of which communicate faster than the speed of light. So a future Internet will need wires, radio links, optical fibers, or other kinds of communications links, mostly classical, but also including a few quantum channels.
How soon before the quantum internet could arrive?
I don’t think there will ever be an all-quantum or mostly-quantum internet. Quantum cryptographic systems are already in use in a few places and, I think, can fairly be said to have proven their potential for improving cybersecurity. Within a few decades I think there will be practical large-scale quantum computers, which will be used to solve some problems intractable on any present or foreseeable classical computer, but they will not replace classical computers for most problems. I think the Internet as a whole will continue to consist mostly of classical computers, communications links, and data storage devices.
Given that the existing classical Internet is not going away, what sort of global quantum infrastructure can we expect, and what would it be used for?

Quantum cryptographic key distribution, the most mature quantum information application, is already deployed over short distances today (typically < 100 km). Planned experiments between ground stations and satellites in low earth orbit promise to increase this range severalfold. The next and more important stage, which depends on further progress in quantum memory and error correction, will probably be the development of a network of quantum repeaters, allowing entanglement to be generated between any two nodes in the network and, more importantly, stockpiled and stored until needed. Aside from its benefits for cybersecurity (allowing quantum-generated cryptographic keys to be shared between any two nodes without having to trust the intermediate nodes), such a globe-spanning quantum repeater network will have important scientific applications, for example allowing coherent quantum measurements to be made on astronomical signals over intercontinental distances.

Still later, one can expect full-scale quantum computers to be developed and attached to the repeater network. We would then finally have achieved a capacity for fully general processing of quantum information, both locally and globally—an expensive, low-bandwidth quantum internet if you will—to be used in conjunction with the cheap high-bandwidth classical Internet when the unique capabilities of quantum information processing are needed.
Down Under
I have just moved to the University of Sydney to begin a permanent position here in the Department of Physics. I had a great time at the University of Washington, and I’ll miss working with the fantastic people there. I am looking forward, however, to contributing to the growth of an increasingly strong quantum group here, together with my new colleagues.
Wish me luck!
Also, a bit of general advice. If you want to submit things to QIP, it is generally not a good idea to schedule an international move for the same week as the submission deadline. 🙂
Finally, here are some photos to make you all jealous and to encourage you to visit.
Uncertain on Uncertainty
Over at BBC News, there is an article about a recently published paper (arXiv) by Lee Rozema et al. that could lead to some, ehm, uncertainty about the status of the Heisenberg Uncertainty Principle (HUP).
Before dissecting the BBC article, let’s look at the paper by Rozema et al. The title is “Violation of Heisenberg’s Measurement–Disturbance Relationship by Weak Measurements”. While this title might raise a few eyebrows, the authors make it crystal clear in the opening sentence of the abstract that they didn’t disprove the HUP or some such nonsense. The HUP is a theorem within the standard formulation of quantum mechanics, so finding a violation of that would be equivalent to finding a violation of quantum theory itself! Instead, they look at the so-called measurement–disturbance relationship (MDR), which is a non-rigorous heuristic that is commonly taught to give an intuition for the uncertainty principle.
The HUP is usually stated in the form of the Robertson uncertainty relation, which says that a given quantum state $latex \psi$ cannot (in general) have zero variance with respect to two non-commuting observables. The more modern formulations are stated in a way that is independent of the quantum state; see this nice review by Wehner and Winter for more about these entropic uncertainty relations.
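For reference, in symbols the Robertson relation bounds the standard deviations of two observables $latex A$ and $latex B$ in the state $latex \psi$ as $latex \sigma_A \, \sigma_B \ge \tfrac{1}{2} \left|\langle \psi | [A,B] | \psi \rangle\right|$, so the bound is nontrivial exactly when the commutator’s expectation value is nonzero.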
By contrast, the MDR states that the product of the measurement precision and the measurement disturbance (quantified as root-mean-squared deviations between ideal and actual measurement variables) can’t be smaller than a constant of order Planck’s constant. In 2002, Masanao Ozawa proved that this was inconsistent with standard quantum mechanics, and formulated a corrected version of the MDR that also takes into account the state-dependent variance of the observables. Building on Ozawa’s work, in 2010 Lund and Wiseman proposed an experiment which could measure the relevant quantities using the so-called weak value.
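In symbols, writing $latex \epsilon(A)$ for the precision of the $latex A$ measurement, $latex \eta(B)$ for the disturbance it imparts to $latex B$, and $latex \sigma$ for the usual standard deviations, the naive MDR asserts $latex \epsilon(A)\,\eta(B) \ge \tfrac{1}{2}|\langle [A,B] \rangle|$, whereas Ozawa’s universally valid replacement reads $latex \epsilon(A)\,\eta(B) + \epsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B) \ge \tfrac{1}{2}|\langle [A,B] \rangle|$; the latter is the relation probed by the Lund–Wiseman scheme and the Rozema et al. experiment.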
Rozema et al. implemented the Lund-Wiseman scheme using measurements of complementary observables ($latex X$ and $latex Z$) on the polarization states of a single photon to confirm Ozawa’s result, and to experimentally violate the MDR. The experiment is very cool, since it crucially relies on entanglement induced between the probe photon and the measurement apparatus.
The bottom line: the uncertainty principle emerges completely unscathed, but the original hand-wavy MDR succumbs to both theoretical and now experimental violations.
Now let’s look at the BBC article. Right from the title and the subtitle, they get it wrong. “Heisenberg uncertainty principle stressed in new test”—no, that’s wrong—“Pioneering experiments have cast doubt on a founding idea…”—also no—the results were consistent with the HUP, and actually corroborated Ozawa’s theory of measurement–disturbance! Then they go on to say that this “could play havoc with ‘uncrackable codes’ of quantum cryptography.” The rest of the article has a few more whoppers, but also some mildly redeeming features; after such a horrible start, though, you might as well quietly leave the pitch. Please science journalists, try to do better next time.
Taken to School
Here is a fine piece of investigative journalism about a very widespread scam that is plaguing academia. Definitely worth a watch.