Are articles in high-impact journals more like designer handbags, or monarch butterflies?

Monarch butterfly handbag

US biologist Randy Schekman, who shared this year’s physiology and medicine Nobel prize, has made prompt use of his new bully pulpit. In
How journals like Nature, Cell and Science are damaging science: The incentives offered by top journals distort science, just as big bonuses distort banking
he singled out these “luxury” journals as a particularly harmful part of the current milieu in which “the biggest rewards follow the flashiest work, not the best,” and he vowed no longer to publish in them. An accompanying Guardian article includes defensive quotes from representatives of Science and Nature, especially in response to Schekman’s assertions that the journals favor controversial articles over boring but scientifically more important ones like replication studies, and that they deliberately seek to boost their impact factors by restricting the number of articles published, “like fashion designers who create limited-edition handbags or suits.” Focusing on journals, his main concrete suggestion is to increase the role of open-access online journals like his eLife, supported by philanthropic foundations rather than subscriptions. But Schekman acknowledges that blame extends to funding organizations and universities, which use publication in high-impact-factor journals as a flawed proxy for quality, and to scientists who succumb to the perverse incentives to put career advancement ahead of good science. Similar points were made last year in Serge Haroche’s thoughtful piece on why it’s harder to do good science now than in his youth. This, and Nature’s recent story on Brazilian journals’ manipulation of impact factor statistics, illustrate how prestige journals are part of the solution as well as the problem.
Weary of people and institutions competing for the moral high ground in a complex terrain, I sought a less value-laden approach,  in which scientists, universities, and journals would be viewed merely as interacting IGUSes (information gathering and utilizing systems), operating with incomplete information about one another. In such an environment, reliance on proxies is inevitable, and the evolution of false advertising is a phenomenon to be studied rather than disparaged.  A review article on biological mimicry introduced me to some of the refreshingly blunt standard terminology of that field.  Mimicry,  it said,  involves three roles:  a model,  i.e.,  a living or material agent emitting perceptible signals, a mimic that plagiarizes the model, and a dupe whose senses are receptive to the model’s signal and which is thus deceived by the mimic’s similar signals.  As in human affairs, it is not uncommon for a single player to perform several of these roles simultaneously.

Cosmology meets Philanthropy — guest post by Jess Riedel

My colleague Jess Riedel recently attended a conference  exploring the connection between these seemingly disparate subjects, which led him to compose the following essay.–CHB

People sometimes ask me how my research will help society. This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say… tenuously. And of course, this is a fair question from the layman; tax dollars support most of our work.
I generally take the attitude of former Fermilab director Robert R. Wilson.  During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR.  He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”
Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”. As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e. the difference between our current low-entropy state and the bleak maximal-entropy state that lies in our far future. And just about everything we might value is ultimately powered by it. As we speak (or blog), the stupendously vast majority of negentropy usage is directed toward rather uninspiring ends, like illuminating distant planets no one will ever see.
These resources can probably be put to better use. Bostrom points out that, assuming we don’t destroy ourselves, our descendants likely will one day spread through the universe. Delaying our colonization of the Virgo Supercluster by one second forgoes about $latex 10^{16}$ human life-years. Each year, on average, an entire galaxy, with its billions of stars, is slipping outside of our cosmological event horizon, forever separating it from Earth-originating life. Maybe we should get on with it?
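Where does a figure like $latex 10^{16}$ come from? Here is a back-of-the-envelope sketch; the star count and per-star population below are illustrative assumptions of mine, not Bostrom’s inputs:

```python
# Rough "astronomical waste" arithmetic; every input is an illustrative
# assumption, not a figure taken from Bostrom's paper.
stars_in_virgo_supercluster = 1e14   # assumed order of magnitude
people_supported_per_star = 1e10     # assumed steady-state population per star
seconds_per_year = 3.15e7

steady_state_population = stars_in_virgo_supercluster * people_supported_per_star

# Delaying settlement by one second shifts that whole future back by one
# second, forgoing one second of lived time for the eventual population.
life_years_forgone_per_second = steady_state_population / seconds_per_year
print(f"{life_years_forgone_per_second:.0e} human life-years per second of delay")
```

With these made-up but not crazy inputs the answer comes out around $latex 3\times 10^{16}$ life-years per second of delay, the same order of magnitude as the figure quoted above.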
But the careful reader will note that not everyone believes the supply of negentropy is well understood or even necessarily fixed, especially given the open questions in general relativity, cosmology, quantum mechanics, and (recently) black holes.  Changes in our understanding of these and other issues could have deep implications for the future.  And, as we shall see, for what we do tomorrow.
On the other side of the pond, two young investment analysts at Bridgewater Associates got interested in giving some of their new disposable income to charity. Naturally, they wanted to get something for their investment, and so they went looking for information about what charity would get them the most bang for their buck. But it turned out that not many people in the philanthropic world seemed to have a good answer. A casual observer would even be forgiven for thinking that nobody really cared about what was actually getting done with the quarter of a trillion dollars donated annually to charity. And this is no small matter; as measured by just about any metric you choose—lives saved, seals unclubbed, children dewormed—charities vary by many orders of magnitude in efficiency.
This prompted them to start GiveWell, now considered by many esteemed commentators to be the premier charity evaluator. One such commentator is Princeton philosopher Peter Singer, who proposed the famous thought experiment of the drowning child. Singer is also actively involved with a larger movement that these days goes by the name “Effective Altruism”. Its founding question: If one wants to accomplish the most good in the world, what, precisely, should one be doing?
You won’t be surprised that there is a fair amount of disagreement on the answer.  But what might surprise you is how disagreement about the fundamental normative questions involved (regardless of the empirical uncertainties) leads to dramatically different recommendations for action.    
A first key topic is animals.  Should our concern about human suffering be traded off against animal suffering? Perhaps weighted by neural mass?  Are we responsible for just the animals we farm, or the untold number suffering in the wild?  Given Nature’s fearsome indifference, is the average animal life even worth living?  Counterintuitive results abound, like the argument that we should eat more meat because animal farming actually displaces much more wild animal suffering than it creates.
Putting animals aside, we will still need to balance “suffering averted”  with “flourishing created”.  How many malaria deaths will we allow to preserve a Rembrandt?  Very, very bad futures controlled by totalitarian regimes are conceivable; should we play it safe and blow up the sun?
But the accounting for future people leads to some of the most arresting ideas. Should we care about people any less just because they will live in the far future? If their existence is contingent on our action, is it bad for them to not exist? Here, we stumble on deep issues in population ethics. Legendary Oxford philosopher Derek Parfit formulated the argument of the “repugnant conclusion”. It casts doubt on the idea that a billion wealthy people living sustainably for millennia on Earth would be as ideal as you might initially think.
(Incidentally, the aim of such arguments is not to convince you of some axiomatic position that you find implausible on its face, e.g. “We should maximize the number of people who are born”. Rather, the idea is to show you that your own already-existing beliefs about the badness of letting people needlessly suffer will probably compel you to act differently, if only you reflect carefully on them.)
The most extreme end of this reasoning brings us back to Bostrom, who points out that we find ourselves at a pivotal time in history. Excepting the last century, humans have existed for a million years without the ability to cause our own extinction.  In probably a few hundred years—or undoubtedly in a few thousand—we will have the ability to create sustainable settlements on other worlds, greatly decreasing the chance that a calamity could wipe us out. In this cosmologically narrow time window we could conceivably extinguish our potentially intergalactic civilization through nuclear holocaust or other new technologies.  Even tiny, well-understood risks like asteroid and comet strikes (probability of extinction event: ~$latex 10^{-7}$ per century) become seriously compelling when the value of the future is brought to bear. Indeed, between $latex 10^{35}$ and $latex 10^{58}$ future human lives hang in the balance, so it’s worth thinking hard about.
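Multiplying the quoted asteroid probability by even the lower bound on future lives gives a sense of the scale:

$latex 10^{-7}\ \mathrm{per\ century}\times 10^{35}\ \mathrm{lives}\approx 10^{28}\ \mathrm{expected\ future\ lives\ at\ stake\ per\ century}$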
So why are you on Facebook when you could be working on Wall Street and donating all your salary to avert disaster? Convincingly dodging this argument is harder than you might guess.  And there are quite a number of smart people who bite the bullet.
 

Scientific Birthdays – Toffoli gate honored on namesake's 70th

Tommaso Toffoli’s 3-input, 3-output logic gate, central to the theory of reversible and quantum computing, recently featured on a custom cake made for his 70th birthday.
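For readers who have not met it, the Toffoli gate flips its third bit exactly when the first two bits are both 1; it is reversible (indeed its own inverse) and universal for reversible classical computation. A minimal sketch of its classical action:

```python
def toffoli(a, b, c):
    """Toffoli (controlled-controlled-NOT) gate on three bits:
    the target c is flipped iff both controls a and b are 1."""
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice returns every input unchanged.
for x in range(8):
    bits = ((x >> 2) & 1, (x >> 1) & 1, x & 1)
    assert toffoli(*toffoli(*bits)) == bits
print("Toffoli is its own inverse on all 8 inputs")
```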
Nowadays scientists’ birthday celebrations often take the form of informal mini-conferences, festschrifts without the schrift. I have had the honor of attending several this year, including ones for John Preskill’s and Wojciech Zurek’s 60th birthdays and a joint 80th birthday conference for Myriam Sarachik and Daniel Greenberger, both physics professors at City College of New York. At that party I learned that Greenberger and Sarachik have known each other since high school. Neither has any immediate plans for retirement.
Greenberger Sarachik 80th birthday symposium
 

Resolution of Toom's rule paradox

A few days ago our Ghost Pontiff Dave Bacon wondered how Toom’s noisy but highly fault-tolerant 2-state classical cellular automaton can get away with violating the Gibbs phase rule, according to which a finite-dimensional locally interacting system, at generic points in its phase diagram, can have only one thermodynamically stable phase. The Gibbs rule is well illustrated by the low-temperature ferromagnetic phases of the classical Ising model in two or more dimensions: both phases are stable at zero magnetic field, but an arbitrarily small field breaks the degeneracy between their free energies, making one phase metastable with respect to nucleation and growth of islands of the other. In the Toom model, by contrast, the two analogous phases are absolutely stable over a finite area of the phase diagram, despite biased noise that would seem to favor one phase over the other. Of course Toom’s rule is not microscopically reversible, so it is not bound by the laws of equilibrium thermodynamics.
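Toom’s rule itself is simple to state and to simulate: each cell is replaced by the majority vote of itself, its northern neighbor, and its eastern neighbor, and is then hit by noise that may be biased toward one spin value. Here is a minimal sketch (lattice size, noise rates, and run length are arbitrary illustrative choices):

```python
import numpy as np

def toom_step(s, p_plus, p_minus, rng):
    """One synchronous update of Toom's north-east-center (NEC) rule with biased noise.

    s is a 2D array of +1/-1 spins on a periodic lattice.  Each cell first takes
    the majority of itself, its 'north' neighbor at (i+1, j), and its 'east'
    neighbor at (i, j+1); then, independently at each site, the result is
    overwritten by +1 with probability p_plus or by -1 with probability p_minus.
    (By lattice symmetry the particular corner of neighbors chosen is immaterial.)
    """
    north = np.roll(s, -1, axis=0)
    east = np.roll(s, -1, axis=1)
    maj = np.sign(s + north + east)     # sum of three odd spins is never 0
    r = rng.random(s.shape)
    maj = np.where(r < p_plus, 1, maj)
    maj = np.where((r >= p_plus) & (r < p_plus + p_minus), -1, maj)
    return maj

# For small enough noise the all-(+1) phase should persist even though the
# noise is biased toward -1.
rng = np.random.default_rng(0)
s = np.ones((128, 128), dtype=int)
for _ in range(2000):
    s = toom_step(s, p_plus=0.005, p_minus=0.02, rng=rng)
print("magnetization:", s.mean())
```

Starting instead from the all-(−1) state, the magnetization stays near −1; this bistability under biased noise is just the behavior the Gibbs phase rule forbids for equilibrium systems at generic parameters.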

Nevertheless, as Dave points out, the distribution of histories of any locally interacting d-dimensional system, whether microscopically reversible or not, can be viewed  as an equilibrium Gibbs distribution of a d+1 dimensional system, whose local Hamiltonian is chosen so that the d dimensional system’s transition probabilities are given by Boltzmann exponentials of interaction energies between consecutive time slices.  So it might seem, looking at it from the d+1 dimensional viewpoint, that the Toom model ought to obey the Gibbs phase rule too.
The resolution of this paradox, described in my 1985 paper with Geoff Grinstein, lies in the fact that the d to d+1 dimensional mapping is not surjective. Rather it is subject to the normalization constraint that for every configuration X(t) at time t, the sum over configurations X(t+1) at time t+1 of transition probabilities P(X(t+1)|X(t)) is exactly 1. This in turn forces the d+1 dimensional free energy to be identically zero, regardless of how the d dimensional system’s transition probabilities are varied. The Toom model is able to evade the Gibbs phase rule because

  • being irreversible, its d dimensional free energy is ill-defined, and
  • the normalization constraint allows two phases to have exactly equal d+1 dimensional free energy despite noise locally favoring one phase or the other (see the schematic sum below).
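The vanishing free energy is a one-line calculation: write the probability of a spacetime history as a product of transition probabilities and sum over the last time slice first, then the next-to-last, and so on. Normalization telescopes the whole sum to 1 (here $latex \pi$ is any normalized distribution over initial configurations):

$latex Z_{d+1}=\sum_{X(0),\ldots,X(T)}\pi(X(0))\prod_{t=0}^{T-1}P(X(t+1)|X(t))=1$

so $latex F_{d+1}=-\ln Z_{d+1}=0$ identically, no matter how the transition probabilities are varied.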

Just outside the Toom model’s bistable region is a region of metastability (roughly within the dashed lines in the above phase diagram) which can be given an interesting interpretation in terms of the  d+1 dimensional free energy.  According to this interpretation, a metastable phase’s free energy is no longer zero, but rather -ln(1-Γ)≈Γ, where Γ is the nucleation rate for transitions leading out of the metastable phase.  This reflects the fact that the transition probabilities no longer sum to one, if one excludes transitions causing breakdown of the metastable phase.  Such transitions, whether the underlying d-dimensional model is reversible (e.g. Ising) or not (e.g. Toom), involve critical fluctuations forming an island of the favored phase just big enough to avoid being collapsed by surface tension.  Such critical fluctuations occur at a rate
$latex \Gamma \approx \exp(-\mathrm{const}/s^{d-1})$
where s>0 is the distance in parameter space from the bistable region (or in the Ising example, the bistable line).  This expression, from classical homogeneous nucleation theory, makes the d+1 dimensional free energy a smooth but non-analytic function of s, identically zero wherever a phase is stable, but lifting off very smoothly from zero as one enters the region of metastability.
 

 
Below, we compare  the d and d+1 dimensional free energies of the Ising model with the d+1 dimensional free energy of the Toom model on sections through the bistable line or region of the phase diagram.

We have been speaking so far only of classical models.  In the world of quantum phase transitions another kind of d to d+1 dimensional mapping is much more familiar, the quantum Monte Carlo method, nicely described in these lecture notes, whereby a d dimensional zero-temperature quantum system is mapped to a d+1 dimensional finite-temperature classical Monte Carlo problem.   Here the extra dimension, representing imaginary time, is used to perform a path integral, and unlike the classical-to-classical mapping considered above, the mapping is bijective, so that features of the d+1 dimensional classical system can be directly identified with corresponding ones of the d dimensional quantum one.
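Schematically, the standard construction behind this mapping inserts complete sets of basis states between M imaginary-time slices:

$latex Z=\mathrm{Tr}\,e^{-\beta H}=\sum_{x_1,\ldots,x_M}\prod_{k=1}^{M}\langle x_{k+1}|\,e^{-\beta H/M}\,|x_k\rangle,\qquad x_{M+1}\equiv x_1$

This identity is exact for any M; for large M each factor can be approximated by a simple local Boltzmann-like weight (the Trotter step), and when those weights are nonnegative the sum is a d+1 dimensional classical partition function, with the slice index k playing the role of imaginary time. Taking $latex \beta\to\infty$ extends the extra dimension to infinity and recovers the zero-temperature quantum problem mentioned above.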
 
 

Non-chaotic irregularity

In principle, barring the intervention of chance, identical causes lead to identical effects. And except in chaotic systems, similar causes lead to similar effects. Borges’ story “Pierre Menard” exemplifies an extreme version of this idea: an early 20th-century writer studies Cervantes’ life and times so thoroughly that he is able to recreate several chapters of “Don Quixote” without mistakes and without consulting the original.
Meanwhile, back at the ShopRite parking lot in Croton-on-Hudson, NY, they’d installed half a dozen identical red and white parking signs, presumably all from the same print run, and all posted in similar environments, except for two in a sunnier location.

The irregular patterns of cracks that formed as the signs weathered were so similar that at first I thought the cracks had also been printed, but then I noticed small differences. The sharp corners on letters like S and E,  apparently points of high stress, usually triggered near-identical cracks in each sign, but not always, and in the sunnier signs many additional fine cracks formed. 
Another example of reproducibly irregular dynamics was provided over 30 years ago by Ahlers and Walden’s experiments on convective turbulence, where a container of normal liquid helium, heated from below, exhibited nearly the same sequence of temperature fluctuations in several runs of the experiment.

 
 
 

Simple circuit "factors" arbitrarily large numbers

Last Thursday, at the QIP rump session in Beijing, John Smolin described recent work with Graeme Smith and Alex Vargo [SSV] showing that arbitrarily large numbers $latex N$ can be factored by using this constant-sized quantum circuit

to implement a compiled version of Shor’s algorithm. The key to SSV’s breathtaking improvement is to choose a base for exponentiation, $latex a$, such that the function $latex a^x \bmod N$ is periodic with period 2. (This kind of simplification—using a base with a short period such as 2, 3, or 4—has in fact been used in all experimental demonstrations of Shor’s algorithm that we know of.) SSV go on to show that an $latex a$ with period 2 exists for every product of distinct primes $latex N=pq$, and therefore that the circuit above can be used to factor any such number, however large. The problem, of course, is that in order to find a 2-periodic base $latex a$, one needs to know the factorization of $latex N$. After pointing this out, and pedantically complaining that any process requiring the answer to be known in advance ought not to be called compilation, the authors forge boldly on and note that their circuit can be simplified even further to a classical fair coin toss, giving a successful factorization whenever it is iterated sufficiently many times to obtain both a Head and a Tail among the outcomes (like having enough kids to have both a girl and a boy). Using a penny and two different US quarters, they successfully factor 15, RSA-768, and a 20,000-bit number of their own invention by this method, and announce plans for implementing the full circuit above on state-of-the-art superconducting hardware. When I asked the authors of SSV what led them into this line of research, they said they noticed that the number of qubits used to do Shor demonstrations has been decreasing over time, even as the number being factored increased from 15 to 21, and they wanted to understand why. Alas, news travels faster than understanding—there have already been inquiries as to whether SSV might provide a practical way of factoring without the difficulty and expense of building a large-scale quantum computer.
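The Chinese Remainder Theorem construction behind SSV’s observation is short enough to show, and it makes the circularity explicit: to build a base with $latex a\equiv 1 \pmod p$ and $latex a\equiv -1 \pmod q$, you must already know $latex p$ and $latex q$. A small illustrative sketch (mine, not the authors’ code):

```python
from math import gcd

def period_two_base(p, q):
    """Build a nontrivial base a with a^2 = 1 (mod N) for N = p*q, with p and q
    distinct odd primes, via the Chinese Remainder Theorem:
    a = 1 (mod p) and a = -1 (mod q).  Note the circularity: constructing a
    already requires knowing the factors of N."""
    N = p * q
    # Solve 1 + p*k = -1 (mod q), i.e. k = -2 * p^{-1} (mod q).
    # (pow with a negative exponent needs Python 3.8+.)
    k = (-2 * pow(p, -1, q)) % q
    a = (1 + p * k) % N
    assert pow(a, 2, N) == 1 and a not in (1, N - 1)
    return a

p, q = 3, 5
N = p * q
a = period_two_base(p, q)                 # a = 4 for N = 15, since 4*4 = 16 = 1 (mod 15)
print(a, gcd(a - 1, N), gcd(a + 1, N))    # prints 4 3 5: the gcds recover p and q
```

For $latex N=15$ this gives $latex a=4$, and the gcds $latex \gcd(a\pm 1,N)$ hand back the factors 3 and 5 that were needed to construct $latex a$ in the first place.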

Apocalypses, Firewalls, and Boltzmann Brains


Last week’s plebeian scare-mongering about the world ending at the wraparound of the Mayan calendar did not distract sophisticated readers of gr-qc and quant-ph from a more arcane problem, the so-called Firewall Question. This concerns what happens to Alice when she falls through the event horizon of a large, mature black hole. Until recently it was thought that nothing special would happen to her other than losing her ability to communicate with the outside world, regardless of whether the black hole was old or young, provided it was large enough for space to be nearly flat at the horizon. But lately Almheiri, Marolf, Polchinski, and Sully argued (see also Preskill’s Quantum Frontiers post and especially the comments on it) that she instead would be vaporized instantly and painlessly as she crossed the horizon. From Alice’s point of view, hitting the firewall would be like dying in her sleep: her experience would simply end. Alice’s friends wouldn’t notice the firewall either, since they would either be outside the horizon where they couldn’t see her, or inside and also vaporized. So the firewall question, aside from being central to harmonizing no-cloning with black hole complementarity, has a delicious epistemological ambiguity.
Notwithstanding these conceptual attractions, firewalls are not a pressing practical problem, because the universe is far too young to contain any of the kind of black holes expected to have them (large black holes that have evaporated more than half their total mass).
A more worrisome kind of instant destruction, both practically and theoretically, is the possibility that the observable universe—the portion of the universe accessible to us—may be in a metastable state, and might decay catastrophically to a more stable ground state. Once nucleated, either spontaneously or through some ill-advised human activity, such a vacuum phase transition would propagate at the speed of light, annihilating the universe we know before we could realize what was happening—our universe would die in its sleep. Most scientists, even cosmologists, don’t worry much about this either, because our universe has been around so long that spontaneous nucleation appears less of a threat than other more localized disasters, such as a nearby supernova or collision with an asteroid. When some people, following the precautionary principle, tried to stop a proposed high-energy physics experiment at Brookhaven Lab because it might nucleate a vacuum phase transition or some other world-destroying disaster, prominent scientists argued that if so, naturally occurring cosmic-ray collisions would already have triggered the disaster long ago. They prevailed, the experiment was done, and nothing bad happened.
The confidence of most scientists, and laypeople, in the stability of the universe rests on gut-level inductive reasoning: the universe contains ample evidence (fossils, the cosmic microwave background, etc.) of having been around for a long time, and it hasn’t disappeared lately. Even my four-year-old granddaughter understands this. When she heard that some people thought the world would end on Dec 21, 2012, she said, “That’s silly. The world isn’t going to end.”
The observable universe is full of regularities, both obvious and hidden, that underlie the success of science, the human activity which the New York Times rightly called the best idea of the second millennium.  Several months ago in this blog, in an effort to formalize the kind of organized complexity which science studies, I argued that a structure should be considered complex, or logically deep, to the extent that it contains internal evidence of a complicated causal history, one that would take a long time for a universal computer to simulate starting from an algorithmically random input.
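Roughly, and glossing over the fine print about significance levels and machine models, that definition can be written as

$latex \mathrm{Depth}_{s}(x)=\min\{\,T(p)\ :\ U(p)=x,\ |p|\le K(x)+s\,\}$

the least running time $latex T(p)$ of any program $latex p$ that outputs $latex x$ on a universal computer $latex U$ while being within $latex s$ bits of the minimal program length $latex K(x)$; a deep object is one all of whose near-minimal programs run for a long time.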
Besides making science possible, the observable universe’s regularities give each of us our notion of “us”, of being one of several billion similar beings, instead of the universe’s sole sentient inhabitant.  An extreme form of that lonely alternative, called a Boltzmann brain, is a hypothetical fluctuation arising within a large universe at thermal equilibrium, or in some other highly chaotic state,  the fluctuation being just large enough to support a single momentarily functioning human brain, with illusory perceptions of an orderly outside world, memories of things that never happened, and expectations of a future that would never happen, because the brain would be quickly destroyed by the onslaught of its hostile actual environment.  Most people don’t believe they are Boltzmann brains because in practice science works.   If a Boltzmann brain observer lived long enough to explore some part of its environment not prerequisite to its own existence, it would find chaos there, not order, and yet we generally find order.
Over the last several decades, while minding their own business and applying the scientific method in a routine way, cosmologists stumbled into an uncomfortable situation: the otherwise successful theory of eternal inflation seemed to imply that tiny Boltzmann brain universes were more probable than big, real universes containing galaxies, stars, and people.  More precisely, in these models, the observable universe is part of an infinite seething multiverse, within which real and fake universes each appear infinitely often, with no evident way of defining their relative probabilities—the so-called “measure problem”.
Cosmologists Raphael Bousso and Leonard Susskind, and Yasunori Nomura (cf. also a later paper), recently proposed a quantum solution to the measure problem, treating the inflationary multiverse as a superposition of terms, one for each universe, including all the real and fake universes that look more or less like ours, and many others whose physics is so different that nothing of interest happens there. Sean Carroll comments accessibly and with cautious approval on these and related attempts to identify the multiverse of inflation with that of many-worlds quantum mechanics.
Aside from the measure problem and the nature of the multiverse, it seems to me that in order to understand why the observed universe is complicated and orderly, we need to better characterize what a sentient observer is. For example, can there be a sentient observer who/which is not complex in the sense of logical depth? A Boltzmann brain would at first appear to be an example of this, because though (briefly) sentient it has by definition not had a long causal history. It is nevertheless logically deep, because despite its short actual history it has the same microanatomy as a real brain, which (most plausibly) has had a long causal history. The Boltzmann brain’s evidence of having had a long history is thus deceptive, like the spurious evidence of meaning the protagonists in Borges’ Library of Babel find by sifting through mountains of chaotic books, until they find one with a few meaningful lines.
I am grateful to John Preskill and especially Alejandro Jenkins for helping me correct and improve early versions of this post, but of course take full responsibility for the errors and misconceptions it may yet contain.

Haroche and Wineland win Physics Nobel

David Wineland
Serge Haroche

The physics prize was shared between experimentalists Serge Haroche and David Wineland, longtime leaders in the study of atom-photon interaction. In recent decades both have honed their techniques to meet the challenges and opportunities opened by “quantum information science,” which aims to rebuild the theory and practice of communication and computation on quantum foundations. This change of viewpoint was led by theorists, beginning with John Bell, and was initially regarded skeptically not only by information theorists and computer scientists, on whose turf it encroached, but even by many physicists, who saw a lot of theorizing, verging on philosophy, with little practice to back it up. Haroche, working often with Rydberg atoms and microwave cavities, and Wineland, with trapped ions and optical fields, took the new approach seriously, and over many years have provided much of the solid foundation of practice that has by now earned the field the right to be taken seriously. At the same time both researchers have done their part to restrain the inevitable hype. A decade and a half ago Haroche, in articles like “Quantum Computing: Dream or Nightmare?”, pointed out how difficult building a quantum computer would be, while always believing it possible in principle, and in the meantime produced, with his group, an impressive stream of experimental results and technical improvements that made it ever more practical. In the same vein, Wineland, when asked if ion traps were the right hardware for building a quantum computer, answered that whatever advantage they had was like being 10 feet ahead at the start of a 10-mile race. Then like Haroche he went ahead making steady progress in the control and measurement of individual particles, with applications quite apart from that distant goal.
Both men are consummate experimentalists, finding and adapting whatever it takes. I visited Wineland’s lab about a decade ago and noticed a common dishwashing glove (right-handed and light blue, as I recall) interposed between the ion trap’s optical window and a CCD camera focused on the ions within. I asked David what its function was among all the more professional-looking equipment. He said this particular brand of gloves happened to be quite opaque, with a matte black inside as good as anything he could get from an optics catalog, meanwhile combining moderate flexibility with sufficient rigidity to stay out of the way of the light path, unlike, say, a piece of black velvet. Indeed the cut-off thumb fitted nicely onto the optical window, and the wrist was snugly belted around the front of the camera, leaving the fingers harmlessly but ludicrously poking out at the side. The physics Nobel has occasioned a lot of press coverage, much of it quite good in conveying the excitement of quantum information science, while restraining unrealistic expectations. We especially like Jason Palmer’s story from earlier this year, which the BBC resurrected to explain a field that this Nobel has suddenly thrust into the limelight. We congratulate Haroche and Wineland as deserving and timely winners of this first Nobel given to people who could fairly be described, and would now describe themselves, as quantum information scientists.
 

A way around Nobel's 3-person limit

 
Die illustrating probabilistic Nobel
Most fields of science have become increasingly collaborative over the last century, sometimes forcing the Nobel Prizes to unduly truncate the list of recipients, or neglect major discoveries involving more than three discoverers. In January we pointed out a possible escape from this predicament: choose three official laureates at random from a larger list, then publish the entire list, along with the fact that the official winners had been chosen randomly from it. The money of course would go to the three official winners, but public awareness that they were no more worthy than the others might induce them to share it. A further refinement would be to use (and perhaps publish) weighted probabilities, allowing credit to be allocated unequally. If the Nobel Foundation’s lawyers could successfully argue that such randomization was consistent with Nobel’s will, the Prizes would better reflect the collaborative nature of modern science, at the same time lessening unproductive competition among scientists to make it into the top three.
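Mechanically, the drawing itself would be trivial. As an illustration (names and credit shares invented for the example), a weighted sample without replacement already does the job:

```python
import numpy as np

# Illustrative sketch of the randomized-laureate proposal; the names and
# credit weights are invented for the example.
contributors = ["Alice", "Bob", "Carol", "Dan", "Eve"]
credit = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # published, unequal credit shares

rng = np.random.default_rng()
official = rng.choice(contributors, size=3, replace=False, p=credit)

print("full published list:", dict(zip(contributors, credit)))
print("official (randomly drawn) laureates:", list(official))
```

Finer points, such as making each contributor’s chance of selection exactly proportional to an announced credit share, would call for a more careful sampling scheme; the point is only that the randomization is easy to implement and easy to publish alongside the full list.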