Happy New Year! To celebrate let’s talk about error correcting codes and….aliens.
The universe, as many have noted, is kind of like a computer. Or at least our best description of the universe is given by equations that prescribe how various quantities change in time, a lot like a computer program describes how data in a computer changes in time. Of course, this ignores one of the fundamental differences between our universe and our computers: our computers tend to be able to persist information over long periods of time. In contrast, the quantities describing our universe tend to become unrecoverable after even a short amount of time: the location (wave function) of an atom, unless carefully controlled, is impacted by an environment that quickly makes it impossible to recover the initial bits (qubits) describing that location.
Computers, then, are special objects in our universe, ones that persist and allow manipulation of long lived bits of information. A lot like life. The bits describing me, the structural stuff of my bones, skin, and muscles, the more concretely information theoretic bits of my grumbly personality and memories, the DNA describing how to build a new version of me, are all pieces of information that persist over what we call a lifetime, despite the constant gnaw of second law armed foes that would transform me into unliving goo. Maintaining my bits in the face of phase transitions, viruses, bowel obstructions, cardiac arrest, car accidents, and bears is what human life is all about, and we increasingly do it well, with life expectancy now approaching 80 years in many parts of the world.
But 80 years is not that long. Our universe is 13.8ish billion years old, or about 170ish million of a lucky modern human’s life expectancies. Most of us would, all things being equal, like to live even longer. We’re not big fans of death. So what obstacles are there toward life extension? Yadda yadda biology squishy stuff, yes. Not qualified to go there so I won’t. But since life is like a computer with regard to maintaining information, we can look toward our understanding of what allows computers to preserve information…and extrapolate!
Enter error correction. If bits are subject to processes that flip their values, then you’ll lose information. If, however, we give up storing information in each individual bit and instead store a single bit across multiple individual noisy bits, we can make this spread-out bit live much longer. Instead of saying 0, and watching it decay to an unknown value, say 000…00, 0 many times, and ask whether the majority of these values remain 0. Voilà, you’ve got an error correcting code. Your smeared-out information will be preserved longer, but, and here is the important point, at the cost of using more bits.
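To make the tradeoff concrete, here is a toy simulation (a sketch of my own, not a construction from any particular paper): n noisy copies of a bit, each flipping with probability p per step, with a majority-vote correction applied after every step.

```python
import random

def lifetime(n, p, max_steps=100_000):
    """Steps until an n-bit repetition code, corrected by majority vote
    after every step, loses the logical 0 it started with."""
    bits = [0] * n
    for t in range(1, max_steps + 1):
        # each physical bit flips independently with probability p
        bits = [b ^ (random.random() < p) for b in bits]
        if sum(bits) > n / 2:     # majority has flipped: logical failure
            return t
        bits = [0] * n            # "error correction": reset to the majority value
    return max_steps              # survived the whole run

p = 0.1
for n in (1, 3, 5, 7, 9):
    avg = sum(lifetime(n, p) for _ in range(200)) / 200
    print(f"n={n}: average lifetime ~ {avg:.0f} steps")
```

A single bit at p = 0.1 dies after about ten steps on average; a handful of copies already buys orders of magnitude more lifetime, paid for entirely in extra bits and extra majority-vote work.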
Formalizing this a bit, there is a class of beautiful theorems, due originally to von Neumann, classically, and a host of others, quantumly, called the threshold theorems for fault-tolerant computing. They tell you, given a model for how errors occur, how fast they occur, and how fast you can compute, whether you can reliably compute. Roughly, these theorems all say something like: if your error rate is below some threshold, then you can compute while failing at whatever better rate you desire, using a more complicated construction whose size grows only polynomially in the logarithm of the inverse of the failure rate you wish to achieve. What I’d like to pull out of these theorems for talking about life are two things: 1) there is an overhead to reliable computation, both in size and in time, and 2) the scaling of this overhead depends crucially on the error model assumed.
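Schematically (the notation is mine, not a statement of any one theorem), for a computation of size N:

```latex
% Below threshold, reliability costs only polylogarithmic overhead:
\text{overhead per gate} \;=\; O\!\bigl(\operatorname{polylog}(N/\epsilon)\bigr),
\qquad \text{provided the physical error rate satisfies } p < p_{\mathrm{th}},
```

where epsilon is the overall failure probability you are willing to tolerate and p_th is the threshold set by the error model.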
Which leads back to life. If it is a crucial part of life to preserve information, to keep your bits moving down the timeline, then it seems that the threshold theorems will have something, ultimately, to say about extending your lifespan. What are the error models and what are the fundamental error rates of current human life? Is there a threshold theorem for life? I’m not sure we understand biology well enough to pin this down yet, but I do believe we can use the above discussion to extrapolate about our future evolution.
Or, because witnessing the evolution of humans out of their present state seemingly requires waiting a really long time, or technology we currently don’t have, let’s apply this to…aliens. 13.8 billion years is a long time. It now looks like there are lots of planets. If intelligent life arose on those planets billions of years ago, then it is likely that it has also had billions of years to evolve past the level of intelligence that infects our current human era. Which is to say, it seems like any such hypothetical aliens would have had time to push the boundaries of the threshold theorem for life. They would have manipulated and technologically engineered themselves into beings that live for a long period of time. They would have applied the constructions of the threshold theorem for life to themselves, lengthening their lives by applying the principles of fault-tolerant computing.
As we’ve witnessed over the last century, intelligent life seems to hit a point in its development where rapid technological change occurs. Supposing that the time life spends going from reproducing, unintelligent stuff to megalords of technological magic, able to modify themselves and apply the constructions of the threshold theorem for life, is short, then it seems that most life will be found at the two ends of the spectrum: unthinking goo, or creatures who have climbed the threshold theorem for life to extend their lifespans to extremely long lifetimes. Which lets us think about what alien intelligent life would look like: it will be pushing the boundaries of using the threshold theorem for life.
Which lets us make predictions about what this advanced alien life would look like. First, and probably most importantly, it would be slow. We know that our own biology operates at an error rate that works out to about 80 years. If we want to extend this further, then, taking our guidance from the threshold theorems of computation, we will have to use larger and slower constructions in order to extend this lifetime. And, another important point, we have to do this for each new error model that comes to dominate our death rate. That is, today, cardiac arrest kills the highest percentage of people. This is one error model, so to speak. Once you’ve conquered it, you can go down the line, thinking about error models like earthquakes, falls off cliffs, etc. So, likely, if you want to live a long time, you won’t just be slightly slow compared to our current biological life, but instead extremely slow. And large.
And now you can see my resolution to the Fermi paradox. The Fermi paradox is a fancy way of saying “where are the (intelligent) aliens?” Perhaps we have not found intelligent life because the natural fixed point of intelligent evolution is to produce entities for which our 80-year lifespan is not even a fraction of one of their basic clock cycles. Perhaps we don’t see aliens because, unless you catch life in the short transition from unthinking goo to masters of the universe, the aliens are just operating on too slow a timescale. To discover aliens, we must correlate observations over a long timespan, something we have not yet had the tools and time to do. Even more interesting, the threshold theorems also have you spread your information out across a large number of individually erring subsystems. So not only do you have to look at longer timescales, you also need to make correlated observations over larger and larger systems. Individual bits in error correcting codes look as noisy as before; it is only in the aggregate that information is preserved over longer timespans. So not only do we have to look slower, we need to do so over larger chunks of space. We don’t see aliens, dear Fermi, because we are young and impatient.
And about those error models. Our medical technology is valiantly tackling a long list of human concerns. But those advanced aliens, what kind of error models are they most concerned with? Might I suggest that on the long list of things not yet fixed by their current setup, the error models that end up limiting their lifetimes, we ourselves might appear? So quick, up the ladder of threshold theorems for life, before we end up an error model in some more advanced creature’s slow, intelligent mind!
Portrait of an Academic at a Midlife Crisis
Citations are the currency of academia. But the currency of your heart is another thing altogether. With apologies to my co-authors, here is a plot of my paper citations versus my own subjective rating of the paper. Hover over the circles to see the paper title, citations per year, and score. Click to see the actual paper. (I’ve only included papers that appear on the arxiv.)
If I were an economist I suppose at this point I would fit a sloping line through the data and claim victory. But being a lowly software developer, it’s more interesting to me to give a more anecdotal treatment of the data.
- The paper that I love the most has, as of today, exactly zero citations! Why do I love that paper? Not because it’s practical (far from it). Not because it proves things to an absolute T (just ask the beloved referees of that paper). But I love it because it says there is the possibility that there exists a material that quantum computes in a most peculiar manner. In particular, the paper argues that it is possible to have devices where quantum information starts on one side of the device, you turn on a field over the device, and “bam!” the quantum information is now on the other side of the material with a quantum circuit applied to it. How f’in cool is that! I think it’s just wonderful, and had I stuck around the hallowed halls, I probably would still be yelling about how cool it is, much to the dismay of my friends and colleagues (especially those for whom the word adiabatic causes their brain to go spazo!)
- Three papers I was lucky enough to be involved in as a graduate student, wherein we showed how exchange interactions alone could quantum compute, have generated lots of citations. But the citations correlate poorly with my score. Why? Because it’s my score! Haha! In particular, the paper I love the most out of this series is not the best written, the deepest, the most practical, or the most highly cited. It’s the paper where we first showed that exchange alone was universal for quantum computing. The construction in the paper has large warts on it, but it was the paper where I think I first participated in a process where I felt like we knew something about the possibilities of building a quantum computer that others had not quite exactly thought of. And that feeling is wonderful and is why that paper has such a high subjective score.
- It’s hard not to look at this decade’s worth of theory papers and be dismayed by how far they are from real implementation. I think that is why I like Coherence-preserving quantum bit and Adiabatic Quantum Teleportation. Both of these are super simple and always felt like, if I could just get an experimentalist ~~drunk enough~~ excited enough, they might try to implement that damn thing. The first shows a way to make a qubit that should be more robust to errors because its ground state is in an error detecting state. The second shows a way to get quantum information to move between three qubits using a simple adiabatic procedure related to quantum teleportation. I still hope someday to see these executed on a functioning quantum computer, and I wonder how I’d feel about them should that happen.
When I Was Young, I Thought It Would Be Different….
When I was in graduate school (back before the earth cooled) I remember thinking the following thoughts:
- Quantum computing is a new field filled with two types of people: young people dumb enough to not know they weren’t supposed to be studying quantum computing, and old, tenured people who understood that tenure meant that they could work on what interested them, even when their colleagues thought they were crazy.
- Younger people are less likely to have overt biases against women. By this kind of bias I mean things like the math professor at Caltech who told one of my friends that women were bad at spatial reasoning (a.k.a. jerks). Maybe these youngsters even had less hidden bias?
- Maybe, then, because the field was new, quantum computing would be a discipline in which the proportion of women was higher than the typical rates in its parent disciplines, physics and computer science?
In retrospect, like most of the things I have thought in my life, this line of reasoning was naive.
Reading Why Are There So Few Women In Science in the New York Times reminded me of these thoughts from my halcyon youth, and made me dig through the last few QIP conferences to get one snapshot (note that I just say one, internet comment troll) of the state of women in the quantum computing (theory) world:
| Year | Speakers | Women Speakers | Percent |
|------|----------|----------------|---------|
| 2013 | 41 | 1 | 2.4 |
| 2012 | 43 | 2 | 4.7 |
| 2011 | 40 | 3 | 7.5 |
| 2010 | 39 | 4 | 10.2 |
| 2009 | 40 | 1 | 2.5 |
Personally, it’s hard to read these numbers and not feel a little disheartened.
Why I Left Academia
TLDR: scroll here for the pretty interactive picture.
Over two years ago I abandoned my post at the University of Washington as an assistant research professor studying quantum computing and started a new career as a software developer for Google. Back when I was a denizen of the ivory tower I used to daydream that when I left academia I would write a long “Jerry Maguire”-esque piece about the sordid state of the academic world, of my lot in that world, and how unfair and f**ked up it all is. But maybe with less Tom Cruise. You know the text: the standard rebellious view of all young rebels stuck in the machine (without any mirror). The song “Mad World” has a lyric that I always thought summed up what I thought it would feel like to leave and write such a blog post: “The dreams in which I’m dying are the best I’ve ever had.”
But I never wrote that post. Partially this was because every time I thought about it, the content of that post seemed so run-of-the-mill boring that I feared my friends would never ever come visit me again after reading it. The story of why I left really is not that exciting. Partially because writing a post about why “you left” is about as “you”-centric as you can get, and yes, I realize I have a problem with ego-centric ramblings. Partially because I have been busy learning a new career and writing a lot (omg a lot) of code. Partially also because the notion of “why” is one I, as a card-carrying ex-physicist, cherish, and I knew that I could not possibly do justice to giving a decent “why” explanation.
Indeed: what would a “why” explanation for a life decision such as the one I faced look like? For many years when I would think about this I would simply think “well, it’s complicated, and how can I ever?” There are, of course, the many different components that you think about when considering such decisions. But then what do you do with them? Does it make sense to think about them as probabilities? “I chose to go to Caltech, 50 percent because I liked physics, and 50 percent because it produced a lot of Nobel prize winners.” That does not seem very satisfying.
Maybe the way to do it is to phrase the decisions in terms of probabilities that I was assigning before making the decision. “The probability that I’ll be able to contribute something to physics will be 20 percent if I go to Caltech versus 10 percent if I go to MIT.” But despite what some economists would like to believe there ain’t no way I ever made most decisions via explicit calculation of my subjective odds. Thinking about decisions in terms of what an actor feels each decision would do to increase his/her chances of success feels better than just blindly associating probabilities to components in a decision, but it also seems like a lie, attributing math where something else is at play.
So what would a good description of the model be? After pondering this for a while I realized I was an idiot (for about the eighth time that day. It was a good day.) The best way to describe how my brain was working is, of course, nothing short of my brain itself. So here, for your amusement, is my brain (sorry, only tested using Chrome). Yes, it is interactive.
A Paradox of Toom's Rule?
Science is slow. You can do things like continue a conversation with yourself (and a few commenters) that started in 2005. Which is what I’m now going to do 🙂 The below is probably a trivial observation for one of the cardinals, but I find it kind of interesting.
Let’s begin by recalling the setup. Toom’s rule is a cellular automaton rule for a two dimensional cellular automaton on a square grid. Put +1’s and -1’s on the vertices of a square grid, and then use the following update rule at each step: “Update the value with the majority vote of your own state, the state of your neighbor to the north, and the state of your neighbor to the east.” A few steps of the rule are shown here, with +1 as white and -1 as blue:
As you can see Toom’s rule “shrinks” islands of “different” states (taking away such different cells from the north and east sides of such an island.) It is this property which gives Toom’s rule some cool properties in the presence of noise.
So now consider Toom’s rule, but with noise. Replace Toom’s update rule with the rule followed by a noise process applied to each and every cell. For example, this noise could put the cell into state +1 with probability p and -1 with probability q. Suppose now you are trying to store information in the cellular automaton. You start out at time zero, say, in the all +1 state. Then let Toom’s rule with noise run. If p=q and these values are below a threshold, then if you start in the +1 state you will remain in a state with majority +1 with a probability that goes to one exponentially as a function of the system size. Similarly if you start in -1. The cool thing about Toom’s rule is that this works not just for p=q, but also for some values of p not equal to q (see here for a picture of the phase diagram). That is, there are two stable states in this model, even for biased noise.
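Here is a toy sketch of these noisy dynamics (the grid convention, boundary conditions, and noise parameters are my own choices, not anything from a paper): one synchronous Toom update on a periodic grid, followed by noise that overwrites each cell with +1 with probability p and with -1 with probability q.

```python
import random

def toom_step(grid, p=0.0, q=0.0):
    """One synchronous Toom update on a periodic square grid of +1/-1 values,
    followed by noise: each cell is overwritten with +1 with probability p
    and with -1 with probability q."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # majority of your own state, your neighbor to the north,
            # and your neighbor to the east (rows indexed north to south)
            s = grid[i][j] + grid[(i - 1) % n][j] + grid[i][(j + 1) % n]
            v = 1 if s > 0 else -1
            r = random.random()
            if r < p:
                v = 1
            elif r < p + q:
                v = -1
            new[i][j] = v
    return new

# Start in the all +1 state and watch the average spin under biased noise.
# For noise below threshold, the claim above is that it stays pinned near +1.
n = 32
grid = [[1] * n for _ in range(n)]
for _ in range(200):
    grid = toom_step(grid, p=0.02, q=0.04)
print(sum(map(sum, grid)) / n**2)
```

Running the same experiment from the all -1 state, with the same biased noise, gives the contrasting stable state: two stable states despite the bias.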
Contrast Toom’s rule with a two dimensional Ising model which is in the process of equilibrating to temperature T. If this model has no external field applied, then, like Toom’s rule, there is a phase where the mostly +1 and the mostly -1 states are stable and coexist. This coexistence holds from zero temperature (no dynamics) up to a threshold temperature, the critical temperature of the Ising model. But, unlike in Toom’s rule, if you now add an external field, which corresponds to a dynamics where there is now a greater probability of flipping the cell values to a particular value (p not equal to q above), then the Ising model no longer has two stable phases.
In fact there is a general argument that if you look at a phase diagram as a function of a bunch of parameters (say temperature and applied magnetic field strength in this case), then the places where two stable regimes can coexist have to form a surface with one less dimension than your parameter space. This is known as Gibbs’ phase rule. Toom’s rule violates this. It’s an example of a nonequilibrium system.
So here is what is puzzling me. Consider a three dimensional cubic lattice with +1, -1 spins on its vertices. Define an energy function that is a sum over terms acting on the spins at locations (i,j,k), (i+1,j,k), (i,j+1,k), (i,j,k+1), such that a term contributes 0 if the spin at (i,j,k+1) is in the correct state for Toom’s rule applied to the spins at (i,j,k), (i+1,j,k), and (i,j+1,k), and contributes J otherwise. In other words, the terms enforce that the ground state locally obeys Toom’s rule, if we imagine rolling out Toom’s rule into the time dimension (here the z direction). At zero temperature, the ground state of this system will be two-fold degenerate, corresponding to the all +1 and all -1 states. At finite temperature this model will behave as a Toom’s rule model with symmetric noise (see here for why). So even at finite temperature this will preserve information, like the d>2 Ising model and Toom’s CA rule.
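Written out, one way to express the energy function just described is (with maj denoting the three-spin majority vote, notation mine):

```latex
E \;=\; J \sum_{i,j,k} \Bigl[\, 1 \;-\; \delta\bigl(s_{i,j,k+1},\;
        \operatorname{maj}(s_{i,j,k},\, s_{i+1,j,k},\, s_{i,j+1,k})\bigr) \Bigr]
```

so each violated “Toom constraint” in the z (time) direction costs energy J.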
But since this behaves like Toom’s rule, it seems to me that if you add an external field, then this system is in a bit of a paradox. On the one hand, we know from Gibbs’ phase rule that this should not be able to exhibit two stable phases over a range of external fields. On the other hand, this thing is just Toom’s rule, laid out spatially. So it would seem that one could apply the arguments about why Toom’s rule is robust at finite field. But these contradict each other. So which is it?
4 Pages
Walk up to a physicist at a party (we could add a conditional about the amount of beer consumed by the physicist at this point, but that would be redundant, it is a party after all), and say to him or her “4 pages.” I’ll bet you that 99 percent of the time the physicist’s immediate response will be the three words “Physical Review Letters.” PRL, a journal of the American Physical Society, is one of the top journals to publish in as a physicist, signaling to the mating masses whether you are OK and qualified to be hired as faculty at (insert your college name here). I jest! (As an aside, am I the only one who reads what APS stands for and wonders why I have to see the doctor to try out for high school tennis?) In my past life, before I passed away as Pontiff, I was quite proud of the PRLs I’d been lucky enough to have helped with, including one that has some cool integrals, and another that welcomes my niece into the world.
Wait, what?!? Yes, in “Coherence-Preserving Quantum Bits” the acknowledgements include a reference to my brother’s newborn daughter. Certainly I know of no other paper where such an acknowledgement of a beloved family member is given. The other interesting bit about that paper is that we (okay, probably you can mostly blame me) originally entitled it “Supercoherent Quantum Bits.” PRL, however, has a policy about new words coined by authors, and, while we almost made it to the end without the referee or editor noticing, they made us change the title because “Supercoherent Quantum Bits” would be a new word. Who would have thought that being a PRL editor meant you had to be a defender of the lexicon? (Good thing Ben didn’t include qubits in his title.)
Which brings me to the subject of this post. This is a cool paper. It shows that a very nice quantum error correcting code due to Bravyi and Haah admits a transversal (all at once now, comrades!) controlled-controlled-phase gate, and that this, combined with another transversal gate (everyone’s fav, the Hadamard) and fault-tolerant quantum error correction, is universal for quantum computation. This shows a way to perform fault-tolerant quantum computing without state distillation, which is exciting for those of us who hope to push the quantum computing threshold through the roof with resources available to even a third world quantum computing company.
What does this have to do with PRL? Well, this paper has four pages. I don’t know if it is going to be submitted or has already been accepted at PRL, but it has that marker that sets off my PRL radar, bing bing bing! And now here is an interesting thing I found in this paper. The awesome amazing very cool code in this paper is defined via its stabilizer generators:
IIIIIIIXXXXXXXX; IIIIIIIZZZZZZZZ,
IIIXXXXIIIIXXXX; IIIZZZZIIIIZZZZ,
IXXIIXXIIXXIIXX; IZZIIZZIIZZIIZZ,
XIXIXIXIXIXIXIX; ZIZIZIZIZIZIZIZ.
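As a quick sanity check, here are those generators transcribed into Python (the check is mine, not from the paper): stabilizer generators must pairwise commute, and two Pauli strings commute exactly when they disagree, on non-identity entries, at an even number of qubits.

```python
from itertools import combinations

x_stabs = [
    "IIIIIIIXXXXXXXX",
    "IIIXXXXIIIIXXXX",
    "IXXIIXXIIXXIIXX",
    "XIXIXIXIXIXIXIX",
]
z_stabs = [s.replace("X", "Z") for s in x_stabs]

def commute(a, b):
    """Pauli strings commute iff they differ (with both entries non-identity)
    on an even number of qubits."""
    anti = sum(1 for p, q in zip(a, b) if "I" not in (p, q) and p != q)
    return anti % 2 == 0

assert all(commute(a, b) for a, b in combinations(x_stabs + z_stabs, 2))
print("all 8 generators pairwise commute")
```

Transcribing the whole code takes eight fifteen-character strings; keep that size in mind for what follows.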
Those stabilizer generators take up a whopping 4 lines of the article. The disclaimer in the acknowledgements, on the other hand, reads:
“The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government.”
Now I’m not some come-of-age tea party enthusiast who yells at the government like a coyote howls at the moon (I went to Berkeley damnit, as did my parents before me.) But really, have we come to a point where the god-damn disclaimer on an important paper is longer than the actual definition of the code that makes the paper so amazing?
Before I became a ghost pontiff, I had to raise money from many different three, four, and five letter agencies. I’ve got nothing but respect for the people who work the jobs that help supply funding for large research areas like quantum computing. In fact, I personally think we probably need even more people to execute on the civic duty of getting funding to the most interesting and most transformative long- and short-term research projects. But really? A disclaimer longer than the code which the paper is about? Disclaiming what, exactly? Erghhh.
The Quantum Interregnum Decoherence Continues
The Quantum Interregnum Continues
Quantum Interregnum!
The Vicar of Randomization hereby informs the readers of this blog that the Quantum Pontiff Dave Bacon XLII has decohered. He further informs the readers of this blog that the entire Quantum Pontiff blog is in Justitium and hopes that the readers will refrain from acting like Canucks fans after losing the Stanley Cup.
The College of Cardinals will begin conclave in the coming days, please stay tuned to this chimney.