Last week’s plebeian scare-mongering about the world ending at the wraparound of the Mayan calendar did not distract sophisticated readers of gr-qc and quant-ph from a more arcane problem, the so-called Firewall Question. This concerns what happens to Alice when she falls through the event horizon of a large, mature black hole. Until recently it was thought that nothing special would happen to her other than losing her ability to communicate with the outside world, regardless of whether the black hole was old or young, provided it was large enough for space to be nearly flat at the horizon. But lately Almheiri, Marolf, Polchinski, and Sully argued (see also Preskill’s Quantum Frontiers post and especially the comments on it) that she instead would be vaporized instantly and painlessly as she crossed the horizon. From Alice’s point of view, hitting the firewall would be like dying in her sleep: her experience would simply end. Alice’s friends wouldn’t notice the firewall either, since they would either be outside the horizon where they couldn’t see her, or inside and also vaporized. So the firewall question, aside from being central to harmonizing no-cloning with black hole complementarity, has a delicious epistemological ambiguity.
Notwithstanding these conceptual attractions, firewalls are not a pressing practical problem, because the universe is far too young to contain any of the kind of black holes expected to have them (large black holes that have evaporated more than half their total mass).
A more worrisome kind of instant destruction, both practically and theoretically, is the possibility that the observable universe—the portion of the universe accessible to us—may be in a metastable state, and might decay catastrophically to a more stable ground state. Once nucleated, either spontaneously or through some ill-advised human activity, such a vacuum phase transition would propagate at the speed of light, annihilating the universe we know before we could realize what was happening—our universe would die in its sleep. Most scientists, even cosmologists, don’t worry much about this either, because our universe has been around so long that spontaneous nucleation appears less of a threat than other more localized disasters, such as a nearby supernova or collision with an asteroid. When some people, following the precautionary principle, tried to stop a proposed high-energy physics experiment at Brookhaven Lab because it might nucleate a vacuum phase transition or some other world-destroying disaster, prominent scientists argued that if so, naturally occurring cosmic-ray collisions would already have triggered the disaster long ago. They prevailed, the experiment was done, and nothing bad happened.
The confidence of most scientists, and laypeople, in the stability of the universe rests on gut-level inductive reasoning: the universe contains ample evidence (fossils, the cosmic microwave background, etc.) of having been around for a long time, and it hasn’t disappeared lately. Even my four-year-old granddaughter understands this. When she heard that some people thought the world would end on Dec 21, 2012, she said, “That’s silly. The world isn’t going to end.”
The observable universe is full of regularities, both obvious and hidden, that underlie the success of science, the human activity which the New York Times rightly called the best idea of the second millennium. Several months ago in this blog, in an effort to formalize the kind of organized complexity which science studies, I argued that a structure should be considered complex, or logically deep, to the extent that it contains internal evidence of a complicated causal history, one that would take a long time for a universal computer to simulate starting from an algorithmically random input.
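For readers who like something executable, here is a toy caricature of that idea (my own illustration, much cruder than the formal definition): a string is deep if its short descriptions are all slow to run. In the sketch below, the generator functions, the rule-110 update, the use of source length as a stand-in for program length, and the counted steps as a stand-in for running time are all simplifications I have invented for the purpose.

```python
# Toy caricature of logical depth (not the formal definition): a string is
# "deep" if its short descriptions are slow.  Source length stands in for
# program length, and counted update steps stand in for running time.
import inspect
import random

N = 256  # length of the strings we compare

def gen_random(n, steps):
    """Stands in for an algorithmically random string (a faithful description
    would be about as long as the string itself), produced quickly: shallow."""
    random.seed(0)
    steps[0] += n
    return ''.join(random.choice('01') for _ in range(n))

def gen_constant(n, steps):
    """Short description and fast to run: trivially ordered, also shallow."""
    steps[0] += n
    return '0' * n

# Rule-110 cellular-automaton update table (neighborhood -> new cell).
RULE110 = {'111': '0', '110': '1', '101': '1', '100': '0',
           '011': '1', '010': '1', '001': '1', '000': '0'}

def gen_deep(n, steps):
    """Short description but a long causal history: the candidate for depth."""
    row = '0' * (n // 2) + '1' + '0' * (n - n // 2 - 1)
    for _ in range(2000):                     # many update generations
        padded = row[-1] + row + row[0]       # periodic boundary conditions
        row = ''.join(RULE110[padded[i:i + 3]] for i in range(n))
        steps[0] += n
    return row

for gen in (gen_random, gen_constant, gen_deep):
    steps = [0]
    s = gen(N, steps)
    print(f"{gen.__name__:>12}: ~{len(inspect.getsource(gen)):4d} chars of source, "
          f"~{steps[0]:6d} steps, output starts {s[:24]}...")
```

Only the last generator combines a short description with a long computation, which is the (caricatured) signature of logical depth.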
Besides making science possible, the observable universe’s regularities give each of us our notion of “us”, of being one of several billion similar beings, instead of the universe’s sole sentient inhabitant. An extreme form of that lonely alternative, called a Boltzmann brain, is a hypothetical fluctuation arising within a large universe at thermal equilibrium, or in some other highly chaotic state, the fluctuation being just large enough to support a single momentarily functioning human brain, with illusory perceptions of an orderly outside world, memories of things that never happened, and expectations of a future that would never happen, because the brain would be quickly destroyed by the onslaught of its hostile actual environment. Most people don’t believe they are Boltzmann brains because in practice science works. If a Boltzmann brain observer lived long enough to explore some part of its environment not prerequisite to its own existence, it would find chaos there, not order, and yet we generally find order.
Over the last several decades, while minding their own business and applying the scientific method in a routine way, cosmologists stumbled into an uncomfortable situation: the otherwise successful theory of eternal inflation seemed to imply that tiny Boltzmann brain universes were more probable than big, real universes containing galaxies, stars, and people. More precisely, in these models, the observable universe is part of an infinite seething multiverse, within which real and fake universes each appear infinitely often, with no evident way of defining their relative probabilities—the so-called “measure problem”.
Cosmologists Rafael Bousso and Leonard Susskind and Yasunori Nomura (cf also a later paper) recently proposed a quantum solution to the measure problem, treating the inflationary multiverse as a superposition of terms, one for each universe, including all the real and fake universes that look more or less like ours, and many others whose physics is so different that nothing of interest happens there. Sean Carroll comments accessibly and with cautious approval on these and related attempts to identify the multiverse of inflation with that of many-worlds quantum mechanics.
Aside from the measure problem and the nature of the multiverse, it seems to me that in order to understand why the observed universe is complicated and orderly, we need to better characterize what a sentient observer is. For example, can there be a sentient observer who/which is not complex in the sense of logical depth? A Boltzmann brain would at first appear to be an example of this, because though (briefly) sentient it has by definition not had a long causal history. It is nevertheless logically deep, because despite its short actual history it has the same microanatomy as a real brain, which (most plausibly) has had a long causal history. The Boltzmann brain’s evidence of having had a long history is thus deceptive, like the spurious evidence of meaning the protagonists in Borges’ Library of Babel find by sifting through mountains of chaotic books, until they find one with a few meaningful lines.
I am grateful to John Preskill and especially Alejandro Jenkins for helping me correct and improve early versions of this post, but of course take full responsibility for the errors and misconceptions it may yet contain.
Science Code Manifesto
Recently, one of the students here at U. Sydney and I had the frustrating experience of trying to reproduce a numerical result from a paper, but it just wasn’t working. The code used by the authors was regrettably not made publicly available, so once we were fairly sure that our code was correct, we didn’t know how to resolve the discrepancy. Luckily, in our small community, I knew the authors personally and we were able to figure out why the results didn’t match up. But as code becomes a larger and larger part of scientific projects, these sorts of problems will increase in frequency and severity.
What can we do about it?
A team of very smart computer scientists has come together and written the Science Code Manifesto. It is short and sweet; the whole thing boils down to five simple principles of publishing code:
- Code: All source code written specifically to process data for a published paper must be available to the reviewers and readers of the paper.
- Copyright: The copyright ownership and license of any released source code must be clearly stated.
- Citation: Researchers who use or adapt science source code in their research must credit the code’s creators in resulting publications.
- Credit: Software contributions must be included in systems of scientific assessment, credit, and recognition.
- Curation: Source code must remain available, linked to related materials, for the useful lifetime of the publication.
If you support this, and you want to help contribute to the solution, then please go and endorse the manifesto. Even more importantly, practice the five C’s the next time you publish a paper!
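To make the Code and Curation points concrete, here is a minimal sketch (my own, not part of the manifesto) of the kind of provenance one could save alongside a numerical result, so that discrepancies like the one described above become debuggable: the parameters, the random seed, the code version, and the library versions all travel with the output. The parameter names and the output file name are hypothetical placeholders.

```python
# Minimal provenance sketch (not from the manifesto): write the result together
# with everything needed to regenerate it.  Parameter names and the file name
# are hypothetical placeholders for whatever your analysis actually uses.
import json
import platform
import subprocess

import numpy as np

params = {"n_samples": 10_000, "coupling": 0.3, "seed": 42}

rng = np.random.default_rng(params["seed"])
result = float(np.mean(rng.normal(params["coupling"], 1.0, params["n_samples"])))

try:
    # Assumes the analysis script lives in a git repository.
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
except Exception:
    commit = "unknown"

record = {
    "result": result,
    "params": params,
    "code_version": commit,
    "numpy_version": np.__version__,
    "python_version": platform.python_version(),
}

with open("result_with_provenance.json", "w") as f:
    json.dump(record, f, indent=2)
```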
Are we ready for Venture Qapital?
From cnet and via Matt Liefer, comes news of a new venture capital firm, known as The Quantum Wave Fund. According to their website:
Quantum Wave Fund is a venture capital firm focused on seeking out early stage private companies with breakthrough quantum technology. Our mission is to help these companies capitalize on their opportunities and provide a platform for our investors to participate in the quantum technology wave.
The cnet article clarifies that “quantum technology” means “Security, new measurement devices, and new materials,” which seems about right for what we can expect to meaningfully commercialize in the near term. In fact, two companies (ID Quantique and MagiQ) are already doing so. However, I think it is significant that ID Quantique’s first listed product uses AES-256 (but can be upgraded to use QKD) and MagiQ’s product list first describes technologies like waveform generation and single-photon detection before advertising their QKD technology at the bottom of the page.
It’ll be interesting to see where this goes. Already it has exposed several areas of my own ignorance. For example, from the internet, I learned that VCs want to get their money back in 10-12 years, which gives an estimate for how near-term the technologies are that we can expect investments in. Another area which I know little about, but is harder to google, is exactly what sort of commercial applications there are for the many technologies that are related to quantum information, such as precision measurement and timing. This question is, I think, going to be an increasingly important one for all of us.
QIP schedule up, early registration deadline ending soon
I’ll try to post something less boring soon, but for now wanted to mention that the schedule for QIP is now online, and that early registration ends on Dec 1.
QIP 2015??
The following is a public service announcement from the QIP steering committee. (The “service” is from you, by the way, not the committee.)
Have you ever wondered why the Quantum Information Processing conference seems to travel everywhere except to your hometown? No need to wonder anymore. It’s because you haven’t organized it yet!
The QIP steering committee would like to encourage anyone tentatively interested in hosting QIP 2015 to register their interest with one of us by email prior to QIP 2013 in Beijing. The way it works is that potential hosts present their cases at this year’s QIP, there is an informal poll of the QIP audience, and then soon after, the steering committee chooses between proposals.
Don’t delay!
By the way, QIP 2014 is in Barcelona, so a loose tradition would argue that 2015 should be in North (or South!) America. Why? Some might say fairness or precedent, but for our community, perhaps a better reason is to keep the Fourier transform nice and peaked.
MIT + postdoc opening
After 8 years at MIT, and a little over 7 years away, I will return to MIT this January to join the Physics faculty. I love the crazy nerdy atmosphere at MIT, and am really excited about coming back (although I’ll miss the wonderful people at UW).
Even if Cambridge doesn’t quite look like Sydney, it’s a magical place.
Together with my fellow CTPers Eddie Farhi and Peter Shor, I will also have funding to hire a postdoc there. The other faculty doing the theory side of quantum information at MIT include Scott Aaronson, Ike Chuang, Jeffrey Goldstone and Seth Lloyd; and that’s not even getting into the experimentalists (like Orlando and Shapiro), or the math and CS people who sometimes think about quantum (like Kelner and Sipser).
The application deadline, including letters, is December 1.
You can apply here. Note that when it asks for “three senior physicists”, you should interpret that as “three strong researchers in quantum information, preferably senior and/or well-known.”
QIP 2013 accepted talks, travel grants
The accepted talk list for QIP 2013 is now online. Thomas Vidick has done a great job of breaking the list into categories and posting links to papers: see here. He missed only one that I’m aware of: Kamil Michnicki’s paper on stabilizer codes with power law energy barriers is indeed online at arXiv:1208.3496. Here are Thomas’s categories, together with the number of talks in each category.
- Ground states of local Hamiltonians (4)
- Cryptography (3)
- Nonlocality (6)
- Topological computing and error-correcting codes (4)
- Algorithms and query complexity (6)
- Information Theory (9)
- Complexity (2)
- Thermodynamics (2)
Other categorizations are also possible, and one important emerging trend (for some years now) is the way that “information theory” has broadened far beyond quantum Shannon theory. To indulge in a little self-promotion, my paper 1210.6367 with Brandao is an example of how information-theoretic tools can be usefully applied to many contexts that do not involve sending messages at optimal rates.
It would be fascinating to see how these categories have evolved over the years. A cynic might say that our community is fad-driven, but instead I think that the evolution in QIP topics represents our field working to find its identity and relevance.
On another note, travel grants are available to students and postdocs who want to attend QIP, thanks to funding from the NSF and other organizations. You don’t have to have a talk there, but being an active researcher in quantum information is a plus. Beware that the deadline is November 15 and this is also the deadline for reference letters from advisors.
So apply now!
Victory!
On this historic occasion, let me take this opportunity to congratulate tonight’s winner: Nate Silver, of the Mathematics Party. No word yet on a concession speech from the opposition.
Fans will be pleased to know that Nate is now on twitter.
Plagiarism horror story
Halloween is my favorite holiday: you aren’t strong-armed into unhealthy levels of conspicuous consumption, the costumes and pumpkins are creative and fun, the autumn colors are fantastic, and the weather is typically quite pleasant (or at least it was in pre-climate-change, pre-Hurricane-Sandy days). You don’t even have to travel at all! So in honor of Halloween, I’m going to tell a (true) horror story about…
Back in August, I was asked to referee a paper for a certain prestigious physics journal. The paper had already been reviewed by two referees, and while one referee was fairly clear that the paper should not be published, the other gave a rather weak rejection. The authors replied seeking the opinion of a third referee, and that’s when the editors contacted me.
I immediately noticed that something was amiss: the title of the paper was nearly identical to a paper that my co-authors and I had published in that same journal a couple of years earlier. In fact, out of 12 words in the title, the first 9 were taken verbatim. I’m sorry to say, but it further raised my hackles that the authors and their universities were unknown to me and came from a country with a reputation for rampant plagiarism. Proceeding to the abstract, I found that the authors had copied entire sentences, merely substituting some of the nouns and verbs as if it were a Mad Lib. Scrolling further, the authors copied an entire theorem, taking the equations in the proof symbol-by-symbol and line-by-line!
I told all of this to the editor, and he of course rejected the paper, also explaining why and what constitutes plagiarism. A strange twist is that my original paper was actually cited by the copy. Perhaps the authors thought that if they cited my paper, then the copying wasn’t plagiarism? They had even mentioned my paper directly in their response to the original reports as supporting evidence that their paper should be published. (“You published this other paper which is nearly identical, so why wouldn’t you publish ours?”) Thus, at that point I thought it possible that they simply didn’t understand that their actions constituted plagiarism, and I was grateful that the editor had enlightened them.
Fast forward to today.
I receive another email from a different journal asking to referee a paper… the same paper. They had changed the title, but the abstract and copied theorem were still there. Bizarrely, they even added a fourth author. The zombie paper is back, and it wants to be published!
Of course, I can also raise my previous objections, and re-kill this zombie paper. And I’m considering directly contacting the authors. This clearly isn’t a scalable strategy, however.
It got me thinking. Is there a better way to combat plagiarism of academic papers? One thing that often works in changing people’s behavior is shame. My idea is, perhaps if we build a website where we publicly post the names and affiliations of offenders, then this will cause enough embarrassment to stem the tide. Sort of like the P vs. NP site for erroneous proofs.
What’s your best idea for how to deal with this problem?
21 = 3 * 7
…with high probability.
That joke was recycled, and in recent work, Bristol experimentalists have found a way to recycle photonic qubits to perform Shor’s algorithm using only $latex \lceil \log N \rceil + 1$ qubits. (Ok, so actually it was Kitaev who figured this out, as they are basically using Kitaev’s semi-classical QFT. But they found a way to do this with photons.) Further reductions in the number of qubits can be achieved by “compiling” to throw out the unused ones. In this case, we are left with a qubit and a qutrit. The feedback of the measurement is also replaced with the usual postselection. However, buried in the body of the paper is the main advance:
Our scheme therefore requires two consecutive photonic CNOT gates–something that has not previously been demonstrated–acting on qubit subspaces of the qutrit.
This is important progress, but until the factored numbers get a lot larger, the “size of number factored” yardstick is going to be a frustrating one. For example, how is it that factoring 143 on an NMR quantum computer takes only 4 qubits while factoring 15 takes 7?
Anyone have ideas for something more objective, like a quantum analogue of MIPS? (Incidentally, apparently MIPS is obsolete because of memory hierarchies.) Perhaps Grover’s algorithm would be good, as “compiling” is harder to justify, and noise is harder to survive.
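To sketch what such a yardstick might report (my own illustration, not a concrete proposal): for unstructured search over N items with one marked item, the ideal success probability after k oracle queries is sin²((2k+1)θ) with θ = arcsin(1/√N), so one could score a device by the largest N at which it still beats classical guessing with the same number of queries.

```python
# Rough sketch of a Grover-style figure of merit (an illustration, not a
# proposal from the post): ideal success probability after k oracle queries,
# compared with classical random guessing using the same number of queries.
import math

def grover_success(N, k):
    """Ideal success probability after k Grover iterations, one marked item."""
    theta = math.asin(1.0 / math.sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

for N in (8, 64, 1024):
    theta = math.asin(1.0 / math.sqrt(N))
    k_opt = round(math.pi / (4 * theta) - 0.5)   # near-optimal iteration count
    print(f"N = {N:5d}: {k_opt:3d} queries -> quantum {grover_success(N, k_opt):.3f}, "
          f"classical {k_opt / N:.3f}")
```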
Update:
In the comments, Tom Lawson points out the real significance of factoring 21. This is the smallest example of factoring in which the desired distribution is not uniformly random, thus making it possible to distinguish the successful execution of an experiment from the effects of noise.
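One way to see the flavor of this point numerically (a small sketch of my own, not taken from the paper): compute the ideal outcome distribution of the phase-estimation register in textbook, uncompiled Shor. Every usable base mod 15 has order 2 or 4, which divides the register size, so the peaks all carry exactly equal weight; base 4 mod 21 has order 3, so the peaks have unequal weights and visible tails, and a successful run looks different from flat noise.

```python
# Small numerical sketch (mine, not from the paper): ideal distribution of the
# phase-estimation register in textbook Shor's algorithm.  For N = 15 (order 4,
# which divides 2^n) the peaks are perfectly flat; for N = 21 with base 4
# (order 3) the peaks have unequal weight and nonzero tails.
import numpy as np

def shor_output_distribution(a, N, n_bits):
    """Exact outcome probabilities for the QFT register, base a, modulus N."""
    M = 2 ** n_bits
    f = np.array([pow(a, x, N) for x in range(M)])   # modular exponentiation
    P = np.zeros(M)
    for v in np.unique(f):                           # trace out the work register
        indicator = (f == v).astype(float)
        P += np.abs(np.fft.fft(indicator)) ** 2
    return P / M ** 2

for N, a in [(15, 7), (21, 4)]:
    P = shor_output_distribution(a, N, n_bits=9)
    top = sorted(np.argsort(P)[-4:])                 # four most likely outcomes
    peaks = ", ".join(f"k={k}: {P[k]:.3f}" for k in top)
    print(f"N = {N}, a = {a}: largest outcomes -> {peaks}")
```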