US biologist Randy Schekman, who shared this year’s physiology and medicine Nobel prize, has made prompt use of his new bully pulpit. In a Guardian op-ed, “How journals like Nature, Cell and Science are damaging science: The incentives offered by top journals distort science, just as big bonuses distort banking,”
he singled out these “luxury” journals as a particularly harmful part of the current milieu in which “the biggest rewards follow the flashiest work, not the best,” and he vowed no longer to publish in them. An accompanying Guardian article includes defensive quotes from representatives of Science and Nature, especially in response to Schekman’s assertions that the journals favor controversial articles over boring but scientifically more important ones like replication studies, and that they deliberately seek to boost their impact factors by restricting the number of articles published, “like fashion designers who create limited-edition handbags or suits.” Focusing on journals, his main concrete suggestion is to increase the role of open-access online journals like his eLife, supported by philanthropic foundations rather than subscriptions. But Schekman acknowledges that blame extends to funding organizations and universities, which use publication in high-impact-factor journals as a flawed proxy for quality, and to scientists who succumb to the perverse incentives to put career advancement ahead of good science. Similar points were made last year in Serge Haroche’s thoughtful piece on why it’s harder to do good science now than in his youth. This, together with Nature’s recent story on Brazilian journals’ manipulation of impact-factor statistics, illustrates how prestige journals are part of the solution as well as the problem.
Weary of people and institutions competing for the moral high ground in a complex terrain, I sought a less value-laden approach, in which scientists, universities, and journals would be viewed merely as interacting IGUSes (information gathering and utilizing systems), operating with incomplete information about one another. In such an environment, reliance on proxies is inevitable, and the evolution of false advertising is a phenomenon to be studied rather than disparaged. A review article on biological mimicry introduced me to some of the refreshingly blunt standard terminology of that field. Mimicry, it said, involves three roles: a “model,” i.e., a living or material agent emitting perceptible signals; a “mimic” that plagiarizes the model; and a “dupe” whose senses are receptive to the model’s signal and which is thus deceived by the mimic’s similar signals. As in human affairs, it is not uncommon for a single player to perform several of these roles simultaneously.
Throwing cold water on the Quantum Internet
There has been a lot of loose talk lately about a coming “Quantum Internet”. I was asked about it recently by a journalist and gave him this curmudgeonly answer, hoping to redirect some of the naive enthusiasm:
…First let me remark that “quantum internet” has become a bit of a buzzword that can lead to an inaccurate idea of the likely role of quantum information in a future global information infrastructure. Although quantum concepts such as qubit and entanglement have revolutionized our understanding of the nature of information, I believe that the Internet will remain almost entirely classical, with quantum communication and computation being used for a few special purposes, where the unique capabilities of quantum information are needed. For other tasks, where coherent control of the quantum state is not needed, classical processing will suffice and will remain cheaper, faster, and more reliable for the foreseeable future. Of course there is a chance that quantum computers and communications links may some day become so easy to build that they are widely used for general purpose computing and communication, but I think it highly unlikely.
Would the quantum internet replace the classical one, or would the two somehow coexist?
As remarked above, I think the two would coexist, with the Internet remaining mostly classical. Quantum communication will be used for special purposes, like sharing cryptographic keys, and quantum computing will be used in those few situations where it gives a significant speed advantage (factoring large numbers, some kinds of search, and the simulation of quantum systems), or for the processing of inherently quantum signals (say from a physics experiment).
Would quantum search engines require qubits transmitted between the user’s computer and the web searcher’s host? Or would they simply use a quantum computer performing the search on the host machine, which could then return its findings classically?
It’s not clear that quantum techniques would help search engines, either in transmitting the data to the search engine or in performing the search itself. Grover’s algorithm (where coherent quantum searching gives a quadratic speedup over classical searching) is less applicable to the large physical databases on which search engines operate than to problems like the traveling salesman problem, where the search takes place not over a physical database but over an exponentially large space of virtual possibilities determined by a small amount of physical data.
On the other hand, quantum techniques could play an important supporting role, not only for search engines but other Internet applications, by helping authenticate and encrypt classical communications, thereby making the Internet more secure. And as I said earlier, dedicated quantum computers could be used for certain classically-hard problems like factoring, searches over virtual spaces, simulating quantum systems, and processing quantum data.
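As an aside not from the original exchange: the quadratic speedup that Grover’s algorithm gives is easy to see in a toy statevector simulation. Here is a minimal Python sketch (the qubit count and marked index are my arbitrary choices for illustration): after roughly (π/4)√N iterations, nearly all the amplitude sits on the marked item, compared with ~N/2 classical queries on average.

```python
import numpy as np

# Toy statevector simulation of Grover search: ~(pi/4)*sqrt(N) coherent
# queries versus ~N/2 classical queries on average.
n = 10                        # 10 qubits -> N = 1024 "database" entries
N = 2 ** n
marked = 437                  # the item we are searching for (arbitrary)

state = np.ones(N) / np.sqrt(N)        # uniform superposition over all items
oracle_phase = np.ones(N)
oracle_phase[marked] = -1.0            # oracle flips the marked amplitude's sign

iterations = int(round(np.pi / 4 * np.sqrt(N)))   # 25 iterations for N = 1024
for _ in range(iterations):
    state = oracle_phase * state       # oracle query (phase flip)
    state = 2.0 * state.mean() - state # diffusion: inversion about the mean

print("found:", int(np.argmax(np.abs(state))),
      "with probability", float(np.abs(state[marked]) ** 2))
# found: 437 with probability ~0.999
```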
When we talk about quantum channels do we mean a quantum communication link down which qubits can be sent and which prevents them decohering, or are these channels always an entangled link? …
A quantum channel of the sort you describe is needed, both to transmit quantum signals and to share entanglement. After entanglement has been shared, if one has a quantum memory, it can be stored and used later in combination with a classical channel to transmit qubits. This technique is called quantum teleportation (despite this name, for which I am to blame, quantum teleportation cannot be used for transporting material objects).
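The bookkeeping behind teleportation fits in a few lines of linear algebra, so here is a toy numpy simulation (a sketch under textbook assumptions; the input state, seed, and qubit ordering are my arbitrary choices). Alice Bell-measures her two qubits, sends Bob the two classical bits, and Bob’s conditional correction recovers the original state exactly.

```python
import numpy as np

rng = np.random.default_rng(7)       # seed is an arbitrary choice

zero = np.array([1, 0], dtype=complex)
one  = np.array([0, 1], dtype=complex)
H    = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X    = np.array([[0, 1], [1, 0]], dtype=complex)
Z    = np.array([[1, 0], [0, -1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

psi = 0.6 * zero + 0.8j * one        # the (arbitrary) state to teleport

# Entangled pair shared in advance: Alice holds qubit 2, Bob holds qubit 3
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
state = np.kron(psi, bell)           # qubit order: [Alice-1, Alice-2, Bob]

# Alice's Bell measurement = CNOT(1 -> 2), Hadamard on 1, then measure 1 and 2
state = np.kron(CNOT, np.eye(2)) @ state
state = np.kron(H, np.eye(4)) @ state

amps = state.reshape(2, 2, 2)                   # indices: [m1, m2, Bob]
p = (np.abs(amps) ** 2).sum(axis=2).ravel()     # outcome probabilities
m1, m2 = divmod(rng.choice(4, p=p), 2)          # Alice's two classical bits

# Bob's conditional state, then the correction dictated by (m1, m2)
bob = amps[m1, m2, :]
bob = bob / np.linalg.norm(bob)
bob = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2) @ bob

print(np.allclose(bob, psi))         # True: the state reappears at Bob's end
```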
But could we ever hope for quantum communication in which no wires are needed – but entanglement handles everything?
The most common misconception about entanglement is that it can be used to communicate—that is, to transmit information from a sender to a receiver—perhaps even instantaneously. In fact it cannot communicate at all, except when assisted by a classical or quantum channel, neither of which communicates faster than the speed of light. So a future Internet will need wires, radio links, optical fibers, or other kinds of communications links, mostly classical, but also including a few quantum channels.
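The standard one-line reason, in the blog’s notation: whatever Alice does to her half of a shared state $latex \rho_{AB}$ (say a unitary $latex U_A$; the same holds for any local operation), Bob’s local state is unchanged,

$latex \mathrm{Tr}_A\left[ (U_A \otimes I)\, \rho_{AB}\, (U_A^\dagger \otimes I) \right] = \mathrm{Tr}_A\, \rho_{AB},$

so nothing Bob can measure on his side alone reveals anything about Alice’s actions.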
How soon before the quantum internet could arrive?
I don’t think there will ever be an all-quantum or mostly-quantum internet. Quantum cryptographic systems are already in use in a few places, and I think they can fairly be said to have proven potential for improving cybersecurity. Within a few decades I think there will be practical large-scale quantum computers, which will be used to solve some problems intractable on any present or foreseeable classical computer, but they will not replace classical computers for most problems. I think the Internet as a whole will continue to consist mostly of classical computers, communications links, and data storage devices.
Given that the existing classical Internet is not going away, what sort of global quantum infrastructure can we expect, and what would it be used for? Quantum cryptographic key distribution, the most mature quantum information application, is already deployed over short distances today (typically < 100 km). Planned experiments between ground stations and satellites in low earth orbit promise to increase this range several fold. The next and more important stage, which depends on further progress in quantum memory and error correction, will probably be the development of a network of quantum repeaters, allowing entanglement to be generated between any two nodes in the network, and, more importantly, stockpiled and stored until needed. Aside from its benefits for cybersecurity (allowing quantum-generated cryptographic keys to be shared between any two nodes without having to trust the intermediate nodes) such a globe-spanning quantum repeater network will have important scientific applications, for example allowing coherent quantum measurements to be made on astronomical signals over intercontinental distances. Still later, one can expect full scale quantum computers to be developed and attached to the repeater network. We would then finally have achieved a capacity for fully general processing of quantum information, both locally and globally—an expensive, low-bandwidth quantum internet if you will—to be used in conjunction with the cheap high-bandwidth classical Internet when the unique capabilities of quantum information processing are needed.
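For a concrete glimpse of the “quantum-generated cryptographic keys” mentioned above, here is a toy Python simulation of the sifting step of BB84, the most mature key-distribution protocol (idealized: single photons, no noise, no eavesdropper; all sizes and the seed are my arbitrary illustration choices). It shows why only about half of the transmitted bits survive into the shared key.

```python
import numpy as np

# Idealized BB84 sifting: single photons, no noise, no eavesdropper.
rng = np.random.default_rng(42)
n = 32

alice_bits  = rng.integers(0, 2, n)    # Alice's raw random bits
alice_bases = rng.integers(0, 2, n)    # 0 = rectilinear, 1 = diagonal basis
bob_bases   = rng.integers(0, 2, n)    # Bob picks his bases independently

# If Bob measures in Alice's basis he reads her bit exactly;
# in the wrong basis his outcome is a fair coin flip.
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# They publicly compare bases (never bits) and keep the matching positions.
key_alice = alice_bits[match]
key_bob   = bob_bits[match]

assert np.array_equal(key_alice, key_bob)
print(f"{match.sum()} of {n} raw bits survive sifting:", key_alice)
```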
Uncertain on Uncertainty
Over at BBC News, there is an article about a recently published paper (arXiv) by Lee Rozema et al. that could lead to some, ehm, uncertainty about the status of the Heisenberg Uncertainty Principle (HUP).
Before dissecting the BBC article, let’s look at the paper by Rozema et al. The title is “Violation of Heisenberg’s Measurement–Disturbance Relationship by Weak Measurements”. While this title might raise a few eyebrows, the authors make it crystal clear in the opening sentence of the abstract that they didn’t disprove the HUP or some such nonsense. The HUP is a theorem within the standard formulation of quantum mechanics, so finding a violation of that would be equivalent to finding a violation of quantum theory itself! Instead, they look at the so-called measurement–disturbance relationship (MDR), which is a non-rigorous heuristic that is commonly taught to give an intuition for the uncertainty principle.
The HUP is usually stated in the form of the Robertson uncertainty relation, which says that a given quantum state $latex \psi$ cannot (in general) have zero variance with respect to two non-commuting observables. The more modern formulations are stated in a way that is independent of the quantum state; see this nice review by Wehner and Winter for more about these entropic uncertainty relations.
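For reference, here is the standard textbook form of the Robertson relation (not spelled out in the original post) for two observables $latex A$ and $latex B$:

$latex \sigma_A \, \sigma_B \ge \frac{1}{2} \left| \langle \psi | [A,B] | \psi \rangle \right|,$

where $latex \sigma_A, \sigma_B$ are the standard deviations of $latex A$ and $latex B$ in the state $latex \psi$, and $latex [A,B] = AB - BA$.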
By contrast, the MDR states that the product of the measurement precision and the measurement disturbance (each quantified as a root-mean-squared deviation between ideal and actual measurement variables) can’t be smaller than a constant of order Planck’s constant. In 2002, Masanao Ozawa proved that this relation is inconsistent with standard quantum mechanics, and formulated a corrected version of the MDR that also takes into account the state-dependent variance of the observables. Building on Ozawa’s work, in 2010 Lund and Wiseman proposed an experiment that could measure the relevant quantities using the so-called weak value.
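Concretely, in Ozawa’s notation (which the post doesn’t spell out), the naive MDR for position and momentum reads $latex \epsilon(q)\,\eta(p) \ge \hbar/2$, whereas Ozawa’s corrected, universally valid relation is

$latex \epsilon(A)\,\eta(B) + \epsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B) \ge \frac{1}{2} \left| \langle [A,B] \rangle \right|,$

with $latex \epsilon$ the measurement precision, $latex \eta$ the disturbance, and $latex \sigma$ the ordinary standard deviation in the prepared state. It is this stronger inequality that the experiment confirms.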
Rozema et al. implemented the Lund-Wiseman scheme using measurements of complementary observables ($latex X$ and $latex Z$) on the polarization states of a single photon to confirm Ozawa’s result, and to experimentally violate the MDR. The experiment is very cool, since it crucially relies on entanglement induced between the probe photon and the measurement apparatus.
The bottom line: the uncertainty principle emerges completely unscathed, but the original hand-wavy MDR succumbs to both theoretical and now experimental violations.
Now let’s look at the BBC article. Right from the title and the subtitle, they get it wrong. “Heisenberg uncertainty principle stressed in new test”—no, that’s wrong—“Pioneering experiments have cast doubt on a founding idea…”—also no. The results were consistent with the HUP, and actually corroborated Ozawa’s theory of measurement–disturbance! Then they go on to say that this “could play havoc with ‘uncrackable codes’ of quantum cryptography,” which it can’t, since the HUP itself is untouched. The rest of the article has a few more whoppers, but also some mildly redeeming features; after such a horrible start, though, they might as well quietly leave the pitch. Please, science journalists, try to do better next time.
Quantitative journalism with open data
This is the best news article I’ve seen in a while:
It’s the political cure-all for high gas prices: Drill here, drill now. But more U.S. drilling has not changed how deeply the gas pump drills into your wallet, math and history show.
A statistical analysis of 36 years of monthly, inflation-adjusted gasoline prices and U.S. domestic oil production by The Associated Press shows no statistical correlation between how much oil comes out of U.S. wells and the price at the pump.
Emphasis added. It’s a great example of quantitative journalism. They took the simple and oft-repeated claim that increased US oil production reduces domestic gas prices (known colloquially as “drill baby drill”), and they subjected it to a few simple statistical tests for correlation and causality. The result is that there is no correlation, or at least not one that is statistically significant. They also tested for causality using the notion of Granger causality, and they found that, if anything, higher prices Granger-cause more drilling, not the other way around!
And here’s the very best part of this article. They published the data and the analysis so that you can check the numbers yourself or reach your own conclusion. From the data, one can make a scatter plot of the relative change in price per gallon (inflation adjusted) against the relative change in production.
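In that spirit, here is a minimal sketch of how one might redo the check in Python (the file name and column names are hypothetical placeholders standing in for the AP’s published data; statsmodels’ grangercausalitytests does the heavy lifting):

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical file/column names standing in for the AP data: monthly
# inflation-adjusted pump prices and US domestic oil production.
df = pd.read_csv("gas_prices_vs_production.csv")
price = df["real_price_per_gallon"].pct_change().dropna()
production = df["us_oil_production"].pct_change().dropna()

# Scatter plot of relative monthly changes, as in the article
plt.scatter(production, price, s=10)
plt.xlabel("Relative change in US oil production")
plt.ylabel("Relative change in real price per gallon")
plt.show()

print("correlation:", price.corr(production))

# Does production Granger-cause price? Tests column 2 as a predictor of column 1.
grangercausalitytests(pd.concat([price, production], axis=1), maxlag=12)
# And the reverse direction: do prices Granger-cause drilling?
grangercausalitytests(pd.concat([production, price], axis=1), maxlag=12)
```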
What’s more, they asked several independent experts (three statistics professors and a statistician at an energy consulting firm), and all of them corroborated the analysis.
Kudos to Jack Gillum and Seth Borenstein of the Associated Press for this wonderful article. I hope we can see more examples of quantitative journalism like this in the future, especially with open data.
Still a Lot to Do
Overhyped press releases are standard fare in quantum computing research, and a staple of what makes me sound like a grumpy old man. Really, I’m not that grumpy (really! really!), but I always forget to post the stuff that isn’t overhyped. For example, today I stumbled upon an article about a recent experimental implementation, performed in China, of a code for overcoming qubit loss. In the article, a graduate student on the team was able to get a reasonable quote in:
While optimistic critics are acclaiming the newly achieved progress, the team, however, is cautiously calm. “There are still a lot to do before we can build a practically workable quantum computer. Qubit loss is not the only problem for QC; other types of decoherence are to be overcome,” remarks LU Chaoyang, a PhD student with the team. “But good news is, the loss-tolerant quantum codes demonstrated in our work can be further concatenated with other quantum error correction codes or decoherence-free space to tackle multiple decoherence, and may become a useful part for future implementations of quantum algorithms.”
Ah, that makes me happy.
Revolutionary Breakthrough in Quantum Computing
Hot off the presses!
In an amazing breakthrough, which this press release has no room to describe in any real detail, scientists at research university BigU have made tremendous progress in the field of quantum computing. The results mean that quantum computers are one step closer to replacing your laptop computer.
Room Temperature?
Dear Digg: no, this article and press release do not mean that Scientists Invent Room Temperature Superconducting Material. They mean that scientists have put molecular silane under hundreds of gigapascals of pressure (for comparison, atmospheric pressure is about 100 kilopascals, millions of times smaller) at a temperature of around 20 Kelvin, and gotten it to superconduct. While this is certainly cool, it is not “room temperature” as far as I can tell.
One day I was driving down the road and listening to AM radio when Paul Harvey came on and did his schtick (“and now you know… the rest of the story.”). At one point in the show, Harvey claimed that physicists had recently discovered how to achieve superconductivity at room temperature. I almost drove off the road when I heard this, and rushed home to see if it was true. Unfortunately it was not, and I will never, ever forgive Paul Harvey for making me think this amazing discovery had been made. And now you know the rest of the… ah, whatever.
Your Dog is in Your Head
What’s that you say, Dave? Dogs may be able to think about what their owners are thinking? Sounds like interesting research, Dave. If someone could actually find the real research, and not just this science-by-press-release, I would really like to read the paper, Dave! And then I’d like to chew it up and hide it in the backyard.