Happy deadline, everyone!

The main proof in one of my QIP submissions has developed a giant hole.
Hopefully the US Congress does a better job with its own, somewhat higher stakes, deadline. In many ways their job is easier. They can just submit the same thing as last year and they don’t need to compress their result into three pages. Unfortunately, things are not looking good for them either!
Good luck to all of us.

When I Was Young, I Thought It Would Be Different….

When I was in graduate school (back before the earth cooled) I remember thinking the following thoughts:

  1. Quantum computing is a new field filled with two types of people: young people dumb enough to not know they weren’t supposed to be studying quantum computing, and old, tenured people who understood that tenure meant that they could work on what interested them, even when their colleagues thought they were crazy.
  2. Younger people are less likely to have overt biases against women.  By this kind of bias I mean bias like that of the math professor at Caltech who told one of my friends that women were bad at spatial reasoning (a.k.a. jerks).  Maybe these youngsters even had less hidden bias?
  3. Maybe, then, because the field was new, quantum computing would be a discipline in which the proportion of women was higher than the typical rates in its parent disciplines, physics and computer science?

In retrospect, like most of the things I have thought in my life, this line of reasoning was naive.
Reading Why Are There So Few Women In Science in the New York Times reminded me of these thoughts from my halcyon youth, and made me dig through the last few QIP conferences to get one snapshot (note that I just say one, internet comment troll) of the state of women in the quantum computing (theory) world:

Year   Speakers   Women Speakers   Percent
2013   41         1                2.4
2012   43         2                4.7
2011   40         3                7.5
2010   39         4                10.2
2009   40         1                2.5

Personally, it’s hard to read these numbers and not feel a little disheartened.

Important upcoming deadlines

As part of our ongoing service to the quantum information community, we here at the Quantum Pontiff would be remiss if we didn’t remind you of important upcoming deadlines. We all know that there is a certain event coming in February of 2014, and that we had better prepare for it; the submission deadline is fast approaching.
Therefore, let me take the opportunity to remind you that the deadline to submit to the special issue of the journal Symmetry called “Physics based on two-by-two matrices” is 28 February 2014.

Articles based on two-by-two matrices are invited. … It is generally assumed that the mathematics of this two-by-two matrix is well known. Get the eigenvalues by solving a quadratic equation, and then diagonalize the matrix by a rotation. This is not always possible. First of all, there are two-by-two matrixes that cannot be diagonalized. For some instances, the rotation alone is not enough for us to diagonalize the matrix. It is thus possible to gain a new insight to physics while dealing with these mathematical problems.

I, for one, am really looking forward to this special issue. And lucky for us, it will be open access, with an article processing charge of only 500 Swiss Francs. That’s just 125 CHF per entry of the matrix! Maybe we’ll gain deep new insights about such old classics as $latex \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, or tackle the troublesome and non-normal beast, $latex \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. Who knows? Please put any rumors you have about great new 2×2 matrix results in the comments.
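If you want to play along at home, here is a tiny numpy sketch (my own toy code, nothing to do with the special issue) that checks whether a 2x2 matrix can be diagonalized, and confirms that the non-normal beast above cannot:

import numpy as np

def is_diagonalizable(m, tol=1e-10):
    # A square matrix is diagonalizable iff its eigenvectors span the space,
    # i.e. the matrix whose columns are the eigenvectors has full rank.
    _, eigenvectors = np.linalg.eig(m)
    return np.linalg.matrix_rank(eigenvectors, tol=tol) == m.shape[0]

identity = np.array([[1.0, 0.0], [0.0, 1.0]])
jordan = np.array([[0.0, 1.0], [0.0, 0.0]])  # non-normal, not diagonalizable

print(is_diagonalizable(identity))  # True
print(is_diagonalizable(jordan))    # False: only one independent eigenvector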

Why I Left Academia

TLDR: scroll here for the pretty interactive picture.
Over two years ago I abandoned my post at the University of Washington as an assistant research professor studying quantum computing and started a new career as a software developer for Google. Back when I was a denizen of the ivory tower I used to daydream that when I left academia I would write a long “Jerry Maguire”-esque piece about the sordid state of the academic world, of my lot in that world, and how unfair and f**ked up it all is. But maybe with less Tom Cruise. You know the text, the standard rebellious view of all young rebels stuck in the machine (without any mirror). The song “Mad World” has a lyric that I always thought summed up what I thought it would feel like to leave and write such a blog post: “The dreams in which I’m dying are the best I’ve ever had.”
But I never wrote that post. Partially this was because every time I thought about it, the content of that post seemed so run-of-the-mill boring that I feared my friends who read it would never ever come visit me again. Partially because writing a post about why “you left” is about as “you”-centric as you can get, and yes I realize I have a problem with egocentric ramblings. Partially because I have been busy learning a new career and writing a lot (omg a lot) of code. Partially also because the notion of “why” is one I—as a card-carrying ex-physicist—cherish, and I knew that I could not possibly do justice to giving a decent “why” explanation.
Indeed: what would a “why” explanation for a life decision such as the one I faced look like? For many years when I would think about this I would simply think “well it’s complicated and how can I ever?” There are, of course, the many different components that you think about when considering such decisions. But then what do you do with them? Does it make sense to think about them as probabilities? “I chose to go to Caltech, 50 percent because I liked physics, and 50 percent because it produced a lot of Nobel prize winners.” That does not seem very satisfying.
Maybe the way to do it is to phrase the decisions in terms of probabilities that I was assigning before making the decision. “The probability that I’ll be able to contribute something to physics will be 20 percent if I go to Caltech versus 10 percent if I go to MIT.” But despite what some economists would like to believe there ain’t no way I ever made most decisions via explicit calculation of my subjective odds. Thinking about decisions in terms of what an actor feels each decision would do to increase his/her chances of success feels better than just blindly associating probabilities to components in a decision, but it also seems like a lie, attributing math where something else is at play.
So what would a good description of the model be? After pondering this for a while I realized I was an idiot (for about the eighth time that day. It was a good day.) The best way to describe how my brain was working is, of course, nothing short of my brain itself. So here, for your amusement, is my brain (sorry, only tested using Chrome). Yes, it is interactive.

Cosmology meets Philanthropy — guest post by Jess Riedel

My colleague Jess Riedel recently attended a conference  exploring the connection between these seemingly disparate subjects, which led him to compose the following essay.–CHB

People sometimes ask me how my research will help society.  This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say… tenuously.  And of course, this is a fair question from the layman; tax dollars support most of our work.
I generally take the attitude of former Fermilab director Robert R. Wilson.  During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR.  He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”
Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”.  As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e. the difference between our current low-entropy state and the bleak maximal entropy state that lies in our far future.  And just about everything we might value is ultimately powered by it.  As we speak (or blog), the stupendously vast majority of negentropy usage is directed toward rather uninspiring ends, like illuminating distant planets no one will ever see.
These resources can probably be put to better use.  Bostrom points out that, assuming we don’t destroy ourselves, our descendants likely will one day spread through the universe.  Delaying our colonization of the Virgo Supercluster by one second forgoes about $latex 10^{16}$ human life-years. Each year, on average, an entire galaxy, with its billions of stars, is slipping outside of our cosmological event horizon, forever separating it from Earth-originating life.  Maybe we should get on with it?
But the careful reader will note that not everyone believes the supply of negentropy is well understood or even necessarily fixed, especially given the open questions in general relativity, cosmology, quantum mechanics, and (recently) black holes.  Changes in our understanding of these and other issues could have deep implications for the future.  And, as we shall see, for what we do tomorrow.
On the other side of the pond, two young investment analysts at Bridgewater Associates got interested in giving some of their new disposable income to charity. Naturally, they wanted to get something for their investment, and so they went looking for information about what charity would get them the most bang for their buck.  But it turned out that not too many people in the philanthropic world seemed to have a good answer.  A casual observer would even be forgiven for thinking that nobody really cared about what was actually getting done with the quarter trillion donated annually to charity.  And this is no small matter; as measured by just about any metric you choose—lives saved, seals unclubbed, children dewormed—charities vary by many orders of magnitude in efficiency.
This prompted them to start GiveWell, now considered by many esteemed commentators to be the premier charity evaluator.  One such commentator is Princeton philosopher Peter Singer, who proposed the famous thought experiment of the drowning child.  Singer is also actively involved with a larger movement that these days goes by the name “Effective Altruism”.  Its founding question: If one wants to accomplish the most good in the world, what, precisely, should one be doing?
You won’t be surprised that there is a fair amount of disagreement on the answer.  But what might surprise you is how disagreement about the fundamental normative questions involved (regardless of the empirical uncertainties) leads to dramatically different recommendations for action.    
A first key topic is animals.  Should our concern about human suffering be traded off against animal suffering? Perhaps weighted by neural mass?  Are we responsible for just the animals we farm, or the untold number suffering in the wild?  Given Nature’s fearsome indifference, is the average animal life even worth living?  Counterintuitive results abound, like the argument that we should eat more meat because animal farming actually displaces much more wild animal suffering than it creates.
Putting animals aside, we will still need to balance “suffering averted”  with “flourishing created”.  How many malaria deaths will we allow to preserve a Rembrandt?  Very, very bad futures controlled by totalitarian regimes are conceivable; should we play it safe and blow up the sun?
But the accounting for future people leads to some of the most arresting ideas.  Should we care about people any less just because they will live in the far future?  If their existence is contingent on our action, is it bad for them to not exist?  Here, we stumble on deep issues in population ethics.  Legendary Oxford philosopher Derek Parfit formulated the argument of the “repugnant conclusion”.  It casts doubt on the idea that a billion wealthy people living sustainably for millennia on Earth would be as ideal as you might initially think.
(Incidentally, the aim of such arguments is not to convince you of some axiomatic position that you find implausible on its face, e.g. “We should maximize the number of people who are born”.  Rather, the idea is to show you that your own already-existing beliefs about the badness of letting people needlessly suffer will probably compel you to act differently, if only you reflect carefully on it.)
The most extreme end of this reasoning brings us back to Bostrom, who points out that we find ourselves at a pivotal time in history. Excepting the last century, humans have existed for a million years without the ability to cause our own extinction.  In probably a few hundred years—or undoubtedly in a few thousand—we will have the ability to create sustainable settlements on other worlds, greatly decreasing the chance that a calamity could wipe us out. In this cosmologically narrow time window we could conceivably extinguish our potentially intergalactic civilization through nuclear holocaust or other new technologies.  Even tiny, well-understood risks like asteroid and comet strikes (probability of extinction event: ~$latex 10^{-7}$ per century) become seriously compelling when the value of the future is brought to bear. Indeed, between $latex 10^{35}$ and $latex 10^{58}$ future human lives hang in the balance, so it’s worth thinking hard about.
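To put rough numbers on that last point (my own back-of-envelope arithmetic, using only the figures quoted above): even at the low end of the estimate, a $latex 10^{-7}$ per-century extinction probability applied to $latex 10^{35}$ future lives is an expected loss of order

$latex 10^{-7} \times 10^{35} = 10^{28}$

human lives per century, which is why even very small, very well-understood risks start to look compelling once the value of the far future is on the ledger.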
So why are you on Facebook when you could be working on Wall Street and donating all your salary to avert disaster? Convincingly dodging this argument is harder than you might guess.  And there are quite a number of smart people who bite the bullet.
 

Scientific Birthdays – Toffoli gate honored on namesake's 70th

Tommaso Toffoli’s 3-input 3-output logic gate, central to the theory of reversible and quantum computing, was recently featured on a custom cake made for his 70th birthday.
Nowadays scientists’ birthday celebrations often take the form of informal mini-conferences, festschrifts without the schrift.  I have had the honor of attending several this year, including ones for John Preskill’s and Wojciech Zurek’s 60th birthdays and a joint 80th birthday conference for Myriam Sarachik and Daniel Greenberger, both physics professors at City College of New York.  At that party I learned that Greenberger and Sarachik have known each other since high school.  Neither has any immediate plans for retirement.
Greenberger Sarachik 80th birthday symposium
 

Resolution of Toom's rule paradox

A few days ago our Ghost Pontiff Dave Bacon wondered how Toom’s noisy but highly fault-tolerant 2-state classical cellular automaton can get away with violating the Gibbs phase rule, according to which a finite-dimensional locally interacting system, at generic points in its phase diagram, can have only one thermodynamically stable phase.  The Gibbs rule is well illustrated by the low-temperature ferromagnetic phases of the classical Ising model in two or more dimensions:  both phases are stable at zero magnetic field, but an arbitrarily small field breaks the degeneracy between their free energies, making one phase metastable with respect to nucleation and growth of islands of the other.  In the Toom model, by contrast, the two analogous phases are absolutely stable over a finite area of the phase diagram, despite biased noise that would seem to favor one phase over the other.  Of course Toom’s rule is not microscopically reversible, so it is not bound by laws of equilibrium thermodynamics.

Nevertheless, as Dave points out, the distribution of histories of any locally interacting d-dimensional system, whether microscopically reversible or not, can be viewed  as an equilibrium Gibbs distribution of a d+1 dimensional system, whose local Hamiltonian is chosen so that the d dimensional system’s transition probabilities are given by Boltzmann exponentials of interaction energies between consecutive time slices.  So it might seem, looking at it from the d+1 dimensional viewpoint, that the Toom model ought to obey the Gibbs phase rule too.
The resolution of this paradox, described in my 1985 paper with Geoff Grinstein, lies in the fact that the d to d+1 dimensional mapping is not surjective.  Rather it is subject to the normalization constraint that for every configuration X(t) at time t, the sum over configurations X(t+1) at time t+1 of transition probabilities P(X(t+1)|X(t)) is exactly 1.  This in turn forces the d+1 dimensional free energy to be identically zero, regardless of how the d dimensional system’s transition probabilities are varied (a short version of this bookkeeping is sketched after the list below).  The Toom model is able to evade the Gibbs phase rule because

  • being irreversible, its d dimensional free energy is ill-defined, and
  • the normalization constraint allows two phases to have exactly equal  d+1 dimensional free energy despite noise locally favoring one phase or the other.
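For concreteness, here is the bookkeeping in a nutshell (my own schematic notation; the 1985 paper treats this more carefully). Fix the initial configuration X(0) and write the probability of a history as a Boltzmann weight:

$latex P[X(0),\dots,X(T)] = \prod_{t=0}^{T-1} P(X(t+1)|X(t)) = e^{-H}, \qquad H = -\sum_{t=0}^{T-1} \ln P(X(t+1)|X(t)).$

The d+1 dimensional partition function is then

$latex Z = \sum_{\rm histories} e^{-H} = \sum_{\rm histories}\ \prod_{t=0}^{T-1} P(X(t+1)|X(t)) = 1,$

because the sum telescopes one time slice at a time using the normalization constraint, so $latex F = -\ln Z = 0$ identically, no matter how the transition probabilities are varied.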

Just outside the Toom model’s bistable region is a region of metastability (roughly within the dashed lines in the above phase diagram) which can be given an interesting interpretation in terms of the d+1 dimensional free energy.  According to this interpretation, a metastable phase’s free energy is no longer zero, but rather $latex -\ln(1-\Gamma)\approx\Gamma$, where $latex \Gamma$ is the nucleation rate for transitions leading out of the metastable phase.  This reflects the fact that the transition probabilities no longer sum to one, if one excludes transitions causing breakdown of the metastable phase.  Such transitions, whether the underlying d-dimensional model is reversible (e.g. Ising) or not (e.g. Toom), involve critical fluctuations forming an island of the favored phase just big enough to avoid being collapsed by surface tension.  Such critical fluctuations occur at a rate
$latex \Gamma \approx \exp(-{\rm const}/s^{d-1})$
where $latex s>0$ is the distance in parameter space from the bistable region (or in the Ising example, the bistable line).  This expression, from classical homogeneous nucleation theory, makes the d+1 dimensional free energy a smooth but non-analytic function of s, identically zero wherever a phase is stable, but lifting off very smoothly from zero as one enters the region of metastability.
 

 
Below, we compare  the d and d+1 dimensional free energies of the Ising model with the d+1 dimensional free energy of the Toom model on sections through the bistable line or region of the phase diagram.

We have been speaking so far only of classical models.  In the world of quantum phase transitions another kind of d to d+1 dimensional mapping is much more familiar, the quantum Monte Carlo method, nicely described in these lecture notes, whereby a d dimensional zero-temperature quantum system is mapped to a d+1 dimensional finite-temperature classical Monte Carlo problem.   Here the extra dimension, representing imaginary time, is used to perform a path integral, and unlike the classical-to-classical mapping considered above, the mapping is bijective, so that features of the d+1 dimensional classical system can be directly identified with corresponding ones of the d dimensional quantum one.
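For concreteness (this is the standard textbook identity behind the method, not anything specific to those lecture notes), the mapping rests on the Suzuki-Trotter decomposition of the partition function:

$latex Z = {\rm Tr}\, e^{-\beta H} = \lim_{M\to\infty} {\rm Tr}\left(e^{-\beta H/M}\right)^{M} = \lim_{M\to\infty} \sum_{X(1),\dots,X(M)} \prod_{\tau=1}^{M} \langle X(\tau+1)|\, e^{-\beta H/M}\, |X(\tau)\rangle,$

with $latex X(M+1)\equiv X(1)$.  The sum over the configurations $latex X(\tau)$ defines a d+1 dimensional classical statistical-mechanics problem, with the slice index $latex \tau$ playing the role of imaginary time.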
 
 

A Paradox of Toom's Rule?

Science is slow.  You can do things like continue a conversation with yourself (and a few commenters) that started in 2005.  Which is what I’m now going to do 🙂  The below is probably a trivial observation for one of the cardinals, but I find it kind of interesting.
Let’s begin by recalling the setup.  Toom’s rule is a cellular automaton rule for a two-dimensional cellular automaton on a square grid.  Put +1’s and -1’s on the vertices of a square grid, and then use the following update rule at each step: “Update the value with the majority vote of your own state, the state of your neighbor to the north, and the state of your neighbor to the east.”  A few steps of the rule are shown here with +1 as white and -1 as blue:
As you can see, Toom’s rule “shrinks” islands of “different” states (taking away such different cells from the north and east sides of such an island).  It is this property which gives Toom’s rule some cool properties in the presence of noise.
So now consider Toom’s rule, but with noise.  Replace Toom’s update rule with Toom’s rule followed by a noise process applied to each and every cell.  For example this noise could be to put the cell into state +1 with probability p and into state -1 with probability q.  Suppose now you are trying to store information in the cellular automaton.  You start out at time zero, say, in the all +1 state.  Then let Toom’s rule with noise run.  If p=q and these values are below a threshold, then if you start in the +1 state you will remain in a state with majority +1 with a probability that goes to one exponentially as a function of the system size.  Similarly if you start in -1.  The cool thing about Toom’s rule is that this works not just for p=q, but also for some values of p not equal to q (See here for a picture of the phase diagram.)  That is, there are two stable states in this model, even for biased noise.
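Here is a minimal simulation sketch of the above (my own throwaway numpy code, with an arbitrary orientation convention and noise parameters chosen purely for illustration):

import numpy as np

rng = np.random.default_rng(0)

def toom_step(grid, p, q):
    # One noisy Toom step on a periodic grid of +1/-1 cells: each cell takes
    # the majority of itself, its "north" neighbor, and its "east" neighbor
    # (the orientation convention here is arbitrary), and is then reset to +1
    # with probability p or to -1 with probability q.
    north = np.roll(grid, 1, axis=0)
    east = np.roll(grid, -1, axis=1)
    new = np.sign(grid + north + east)  # a sum of three +/-1's is never zero
    r = rng.random(grid.shape)
    new[r < p] = 1
    new[(r >= p) & (r < p + q)] = -1
    return new

grid = np.ones((64, 64), dtype=int)  # start in the all +1 state
for _ in range(1000):
    grid = toom_step(grid, p=0.02, q=0.05)  # biased noise favoring -1
print("fraction of +1 cells:", (grid == 1).mean())

The claim sketched in the phase diagram linked above is that for small enough p and q, even when they are unequal, the printed fraction stays close to 1 with probability approaching one as the grid grows.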
Contrast Toom’s rule with a two dimensional Ising model which is in the process of equilibrating to temperature T.  If this model has no external field applied, then like Toom’s rule there is a phase where the mostly +1 and the mostly -1 states are stable and coexist.  This coexistence holds from zero temperature (no dynamics) up to a threshold temperature, the critical temperature of the Ising model.  But, unlike in Toom’s rule, if you now add an external field, which corresponds to a dynamics where there is a greater probability of flipping the cell values to a particular value (p not equal to q above), then the Ising model no longer has two stable phases.
In fact there is a general argument that if you look at a phase diagram as a function of a bunch of parameters (say temperature and applied magnetic field strength in this case), then the places where two stable phases can coexist have to form a surface with one less dimension than your parameter space.  This is known as Gibbs’ phase rule.  Toom’s rule violates this.  It’s an example of a nonequilibrium system.
So here is what is puzzling me.  Consider a three dimensional cubic lattice with +1,-1 spins on its vertices. Define an energy function that is a sum over terms that act on the spins at locations (i,j,k), (i+1,j,k), (i,j+1,k), (i,j,k+1) such that E = 0 if the spin at (i,j,k+1) is in the correct state for Toom’s rule applied to spins (i,j,k), (i+1,j,k), and (i,j+1,k), and is J otherwise.  In other words the terms enforce that the ground state locally obeys Toom’s rule, if we imagine rolling out Toom’s rule into the time dimension (here the z direction). At zero temperature, the ground state of this system will be two-fold degenerate, corresponding to the all +1 and all -1 states.  At finite temperature this model will behave as a symmetric-noise Toom’s rule model (see here for why.)  So even at finite temperature this will preserve information, like the d>2 Ising model and Toom’s CA rule.
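In compact form (just my notation for the energy function described above, with $latex [\cdot]$ equal to 1 when the bracketed condition holds and 0 otherwise):

$latex E = J \sum_{i,j,k} \bigl[\, s_{i,j,k+1} \neq {\rm maj}\bigl(s_{i,j,k},\, s_{i+1,j,k},\, s_{i,j+1,k}\bigr) \bigr],$

where maj denotes the three-spin majority vote, i.e. Toom’s rule rolled out along the z (time) direction.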
But since this behaves like Toom’s rule, it seems to me that if you add an external field, then this system is in a bit of a paradox.  On the one hand, we know from Gibbs’ phase rule that this should not be able to exhibit two stable phases over a range of external fields.  On the other hand, this thing is just Toom’s rule, laid out spatially.  So it would seem that one could apply the arguments about why Toom’s rule is robust at finite field.  But these contradict each other.  So which is it?
 

4 Pages

Walk up to a physicist at a party (we could add a conditional about the amount of beer consumed by the physicist at this point, but that would be redundant, it is a party after all), and say to him or her “4 pages.”  I’ll bet you that 99 percent of the time the physicist’s immediate response will be the three words “Physical Review Letters.”  PRL, a journal of the American Physical Society, is one of the top journals to publish in as a physicist, signaling to the mating masses whether you are OK and qualified to be hired as faculty at (insert your college name here).  I jest!  (As an aside, am I the only one who reads what APS stands for and wonders why I have to see the doctor to try out for high school tennis?)  In my past life, before I passed away as Pontiff, I was quite proud of the PRLs I’d been lucky enough to have helped with, including one that has some cool integrals, and another that welcomes my niece into the world.
Wait, what?!?  Yes, in “Coherence-Preserving Quantum Bits” the acknowledgements include a reference to my brother’s newborn daughter.  Certainly I know of no other paper where such an acknowledgement to a beloved family member is given.  The other interesting bit about that paper is that we (okay probably you can mostly blame me) originally entitled it “Supercoherent Quantum Bits.”  PRL, however, has a policy about new words coined by authors, and, while we almost made it to the end without the referee or editor noticing, they made us change the title because “Supercoherent Quantum Bits” would be a new word.  Who would have thought that being a PRL editor meant you had to be a defender of the lexicon?  (Good thing Ben didn’t include qubits in his title.)
Which brings me to the subject of this post.  This is a cool paper.  It shows that a very nice quantum error correcting code due to Bravyi and Haah admits a transversal (all at once now, comrades!) controlled-controlled-phase gate, and that this, combined with another transversal gate (everyone’s fav the Hadamard) and fault-tolerant quantum error correction, is universal for quantum computation.  This shows a way to perform fault-tolerant quantum computing without having to use state distillation, which is exciting for those of us who hope to push the quantum computing threshold through the roof with resources available to even a third world quantum computing company.
What does this have to do with PRL?  Well this paper has four pages.  I don’t know if it is going to be submitted or has already been accepted at PRL, but it has that marker that sets off my PRL radar, bing bing bing!  And now here is an interesting thing I found in this paper.  The awesome amazing very cool code in this paper  is defined via its stabilizer

IIIIIIIXXXXXXXX;  IIIIIIIZZZZZZZZ,
IIIXXXXIIIIXXXX;  IIIZZZZIIIIZZZZ,
IXXIIXXIIXXIIXX;  IZZIIZZIIZZIIZZ,
XIXIXIXIXIXIXIX;  ZIZIZIZIZIZIZIZ,

This takes up a whopping 4 lines of the article.  Whereas the disclaimer, in the acknowledgements reads

The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government.

Now I’m not some come-of-age tea party enthusiast who yells at the government like a coyote howls at the moon (I went to Berkeley damnit, as did my parents before me.)  But really, have we come to a point where the god-damn disclaimer on an important paper is longer than the actual definition of the code that makes the paper so amazing?
Before I became a ghost pontiff, I had to raise money from many different three, four, and five letter agencies.  I’ve got nothing but respect for the people who work the jobs that help supply funding for large research areas like quantum computing.  In fact I personally think we probably need even more people to execute on the civic duty of getting funding to the most interesting and most transformative long- and short-term research projects. But really?  A disclaimer longer than the code which the paper is about?  Disclaiming, what exactly?  Erghhh.

Non-chaotic irregularity

In principle, barring the intervention of chance, identical causes lead to identical effects.  And except in chaotic systems, similar causes lead to similar effects.  Borges’ story “Pierre Menard” exemplifies an extreme version of this idea: an early 20th-century writer studies Cervantes’ life and times so thoroughly that he is able to recreate several chapters of “Don Quixote” without mistakes and without consulting the original.
Meanwhile, back at the ShopRite parking lot in Croton-on-Hudson, NY, they’d installed half a dozen identical red and white parking signs, presumably all from the same print run, and all posted in similar environments, except for two in a sunnier location.

The irregular patterns of cracks that formed as the signs weathered were so similar that at first I thought the cracks had also been printed, but then I noticed small differences. The sharp corners on letters like S and E,  apparently points of high stress, usually triggered near-identical cracks in each sign, but not always, and in the sunnier signs many additional fine cracks formed. 
Another example of reproducibly irregular dynamics was provided over 30 years ago by Ahlers and Walden’s experiments on convective turbulence, where a container of normal liquid helium, heated from below, exhibited nearly the same sequence of temperature fluctuations in several runs of the experiment.