A Breakthrough Donation for Computer Science

Lance Fortnow has a post summarizing some of the news affecting the CS community over the past month, including updates on various prizes and the significant media attention focused on physics- and math-related topics, such as the movies about Turing and Hawking and Terence Tao's appearance on the Colbert Report.

From his post, I just learned that former Microsoft chief executive Steven Ballmer is making a donation to Harvard that will endow twelve—that’s right, 12—new tenured and tenure-track faculty positions in computer science. This is fantastic news and will have a huge positive impact on Harvard CS.

One thing missing from Lance’s list was news about the Breakthrough Prizes in mathematics and fundamental physics. In case you’ve been living under a rock, these prizes give a very hefty US $3 million purse to the chosen recipients. The winners are all luminaries in their field, and it’s great to see them get recognition for their outstanding work.

On the other hand, juxtaposing Ballmer’s donation and the Breakthrough Prizes couldn’t offer a starker contrast. It costs the same amount—$3 million—to endow a university full professor with appointments in more than one discipline at Duke University. My initial googling would suggest that this is a pretty typical figure at top-tier institutions.

What if, instead of offering a cash prize to the Breakthrough Prize winners, the reward were an upgrade to an endowed chair at the winner's current institution, subject to the condition that the vacated position go to a new tenured or tenure-track hire in the same field? This seems like a much better investment in science overall: it would help build a community of researchers around the prize winner, and the marginal benefit to that community of associating with the prize winner is likely far greater than whatever extra incentive researchers get from the prospect of winning $3M in cash under the current system.

Error correcting aliens

Happy New Year!  To celebrate let’s talk about error correcting codes and….aliens.
The universe, as many have noted, is kind of like a computer.  Or at least our best description of the universe is given by equations that prescribe how various quantities change in time, a lot like a computer program describes how data in a computer changes in time.  Of course, this ignores one of the fundamental differences between our universe and our computers: our computers tend to be able to persist information over long periods of time.  In contrast, the quantities describing our universe tend to have a hard time being recoverable after even a short amount of time: the location (wave function) of an atom, unless carefully controlled, is impacted by an environment that quickly makes it impossible to recover the initial bits (qubits) of the location of the atom. 
Computers, then, are special objects in our universe, ones that persist and allow manipulation of long lived bits of information.  A lot like life.  The bits describing me, the structural stuff of my bones, skin, and muscles, the more concretely information theoretic bits of my grumbly personality and memories, the DNA describing how to build a new version of me, are all pieces of information that persist over what we call a lifetime, despite the constant gnaw of second law armed foes that would transform me into unliving goo.  Maintaining my bits in the face of phase transitions, viruses, bowel obstructions, cardiac arrest, car accidents, and bears is what human life is all about, and we increasingly do it well, with life expectancy now approaching 80 years in many parts of the world.
But 80 years is not that long.  Our universe is 13.8ish billion years old, or about 170ish million of today's lucky humans' life expectancies.  Most of us would, all things being equal, like to live even longer.  We're not big fans of death.  So what obstacles are there toward life extension?  Yadda yadda biology squishy stuff, yes.  Not qualified to go there so I won't.  But since life is like a computer with regard to maintaining information, we can look toward our understanding of what allows computers to preserve information…and extrapolate!
Enter error correction.  If bits are subject to processes that flip their values, then you'll lose information.  If, however, we give up storing information in each individual bit and instead spread a single bit across multiple individual noisy bits, we can make this spread-out bit live much longer.  Instead of saying 0, and watching it decay to an unknown value, say 000…00, 0 many times, and ask whether the majority of these values remain 0.  Voilà, you've got an error correcting code.  Your smeared-out information will be preserved longer, but, and here is the important point, at the cost of using more bits.
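As a toy illustration of that tradeoff, here is a minimal sketch (my own, not from any particular paper) of the majority-vote repetition code: each copy flips independently with some made-up probability p, and the encoded bit survives as long as fewer than half of the copies flip.

```python
# Minimal repetition-code sketch under independent bit flips
# (illustrative only; the flip probability and code sizes are made up).
import random
from collections import Counter

def noisy_majority(bit, copies, p, rng):
    """Encode `bit` as `copies` repetitions, flip each copy with probability p,
    then decode by majority vote."""
    received = [bit ^ (rng.random() < p) for _ in range(copies)]
    return Counter(received).most_common(1)[0][0]

rng = random.Random(0)
p = 0.1
trials = 10_000
for copies in (1, 3, 9, 27):
    errors = sum(noisy_majority(0, copies, p, rng) != 0 for _ in range(trials))
    print(f"{copies:3d} copies: logical error rate ~ {errors / trials:.4f}")
# More copies -> the smeared-out bit lives longer, at the cost of more raw bits.
```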
Formalizing this a bit, there is a class of beautiful theorems, due originally to von Neumann classically and to a host of others quantumly, called the threshold theorems for fault-tolerant computing, which tell you, given a model for how errors occur, how fast they occur, and how fast you can compute, whether you can reliably compute. Roughly, these theorems all say something like: if your error rate is below some threshold, then you can compute while failing at any smaller target rate, using a larger construction whose size grows only polynomially in the logarithm of the inverse of the target error rate. What I'd like to pull out of these theorems for talking about life are two things: 1) there is an overhead to reliable computation, which is both larger in size and slower in time, and 2) the scaling of this overhead depends crucially on the error model assumed.
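To get a feel for that polylogarithmic overhead, here is a toy calculation (my own illustrative model and numbers, not any real fault-tolerance construction): assume each level of concatenation maps an error rate p to p_th·(p/p_th)², and multiplies the circuit size by a constant factor.

```python
# Toy model of threshold-theorem overhead (illustrative numbers only, not a
# specific fault-tolerance scheme): each concatenation level maps the error
# rate p to p_th * (p/p_th)**2 and blows the circuit up by a factor C.

def levels_needed(p, target, p_th=1e-2):
    """How many levels of concatenation push the error rate below `target`?"""
    if p >= p_th:
        raise ValueError("above threshold: concatenation only makes things worse")
    levels = 0
    while p > target:
        p = p_th * (p / p_th) ** 2   # each level squares the ratio p/p_th
        levels += 1
    return levels

C = 20   # assumed per-level size blowup
for target in (1e-6, 1e-12, 1e-24):
    k = levels_needed(1e-3, target)
    print(f"target {target:g}: {k} levels, overhead ~ {C**k:,}x")
# The overhead grows like a polynomial in log(1/target), as promised, but it is
# overhead all the same: bigger constructions, and more steps, to keep bits alive.
```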
Which leads back to life. If it is a crucial part of life to preserve information, to keep your bits moving down the timeline, then it seems that the threshold theorems will have something, ultimately, to say about extending your lifespan. What are the error models and what are the fundamental error rates of current human life? Is there a threshold theorem for life? I’m not sure we understand biology well enough to pin this down yet, but I do believe we can use the above discussion to extrapolate about our future evolution.
Or, because witnessing the evolution of humans out of their present state seemingly requires waiting a really long time, or technology we currently don't have, let's apply this to…aliens. 13.8 billion years is a long time. It now looks like there are lots of planets. If intelligent life arose on those planets billions of years ago, then it is likely that it has also had billions of years to evolve past the level of intelligence that infects our current human era. Which is to say it seems like any such hypothetical aliens would have had time to push the boundaries of the threshold theorem for life. They would have manipulated and technologically engineered themselves into beings that live for a long period of time. They would have applied the constructions of the threshold theorem for life to themselves, lengthening their lives by applying principles of fault-tolerant computing.
As we've witnessed over the last century, intelligent life seems to hit a point where rapid technological change occurs. Supposing that the time life spends going from reproducing, unintelligent stuff to megalords of technological magic, who can modify themselves and apply the constructions of the threshold theorem for life, is short, then it seems that most life will be found at the two ends of the spectrum: unthinking goo, or creatures who have climbed the threshold theorem for life to extend their lifespans enormously. Which lets us think about what intelligent alien life would look like: it will be pushing the boundaries of the threshold theorem for life.
Which lets us make predictions about what this advanced alien life would look like. First, and probably most importantly, it would be slow. We know that our own biology operates at error rates that end up giving us lifetimes of about 80 years. If we want to extend this further, then, taking our guidance from the threshold theorems of computation, we will have to use larger and slower constructions in order to extend this lifetime. And, another important point, we have to do this for each new error model which comes to dominate our death rate. That is, today, cardiac arrest kills the highest percentage of people. This is one error model, so to speak. Once you've conquered it, you can go down the line, thinking about error models like earthquakes, falls off cliffs, etc. So, likely, if you want to live a long time, you won't just be slightly slow compared to our current biological life, but instead extremely slow. And large.
And now you can see my resolution to the Fermi paradox. The Fermi paradox is a fancy way of saying "where are the (intelligent) aliens?" Perhaps we have not found intelligent life because the natural fixed point of intelligent evolution is to produce entities for which our 80-year lifespans are not even a fraction of one of their basic clock cycles. Perhaps we don't see aliens because, unless you catch life in the short transition from unthinking goo to masters of the universe, the aliens are just operating on too slow a timescale. To discover aliens, we must correlate observations over a long timespan, something we have not yet had the tools and time to do. Even more interesting, the threshold theorems also have you spread your information out across a large number of individually erring subsystems. So not only do you have to look at longer time scales, you also need to make correlated observations over larger and larger systems. Individual bits in error correcting codes look as noisy as before; it is only in the aggregate that information is preserved over longer timespans. So not only do we have to look slower, we need to do so over larger chunks of space. We don't see aliens, dear Fermi, because we are young and impatient.
And about those error models. Our medical technology is valiantly tackling a long list of human concerns. But those advanced aliens, what kind of error models are they most concerned with? Might I suggest that among those error models, on the long list of things that might not have been fixed by their current setup, the things that end up limiting their error rate, we ourselves might appear? So quick, up the ladder of threshold theorems for life, before we end up an error model in some more advanced creature's slow, intelligent mind!

Simple circuit "factors" arbitrarily large numbers

Last Thursday, at the QIP rump session in Beijing, John Smolin described recent work with Graeme Smith and Alex Vargo [SSV] showing that arbitrarily large numbers $latex N$ can be factored by using this constant-sized quantum circuit

to implement a compiled version of Shor's algorithm.  The key to SSV's breathtaking improvement is to choose a base for exponentiation, $latex a$, such that the function $latex a^x \bmod N$ is periodic with period 2.  (This kind of simplification, using a base with a short period such as 2, 3, or 4, has in fact been used in all experimental demonstrations of Shor's algorithm that we know of.)  SSV go on to show that an $latex a$ with period 2 exists for every product of distinct primes $latex N=pq$, and therefore that the circuit above can be used to factor any such number, however large.  The problem, of course, is that in order to find a 2-periodic base $latex a$, one needs to know the factorization of $latex N$.  After pointing this out, and pedantically complaining that any process requiring the answer to be known in advance ought not to be called compilation, the authors forge boldly on and note that their circuit can be simplified even further to a classical fair coin toss, giving a successful factorization whenever it is iterated sufficiently many times to obtain both a Head and a Tail among the outcomes (like having enough kids to have both a girl and a boy).  Using a penny and two different US quarters, they successfully factor 15, RSA-768, and a 20,000-bit number of their own invention by this method, and announce plans for implementing the full circuit above on state-of-the-art superconducting hardware.  When I asked the authors of SSV what led them into this line of research, they said they noticed that the number of qubits used to do Shor demonstrations has been decreasing over time, even as the number being factored increased from 15 to 21, and they wanted to understand why.  Alas, news travels faster than understanding: there have already been inquiries as to whether SSV might provide a practical way of factoring without the difficulty and expense of building a large-scale quantum computer.
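For readers who want to see the arithmetic behind the joke, here is a small sketch (my own code and function names, not SSV's) of why a period-2 base factors N, and why finding one presupposes the answer: a is built by the Chinese remainder theorem to be +1 mod p and -1 mod q, which requires knowing p and q in the first place.

```python
# Sketch of the period-2 trick for N = p*q with distinct odd primes:
# build a with a = 1 (mod p) and a = -1 (mod q), so a^2 = 1 (mod N) with
# a != +/-1, and gcd(a -/+ 1, N) hands back the factors.  Constructing a
# requires p and q -- exactly the circularity the post is poking fun at.
from math import gcd

def period_two_base(p, q):
    N = p * q
    inv_p = pow(p, -1, q)                       # modular inverse (Python 3.8+)
    return (1 + p * (((q - 2) * inv_p) % q)) % N

def factor_knowing_the_answer(p, q):
    N = p * q
    a = period_two_base(p, q)
    assert pow(a, 2, N) == 1 and a not in (1, N - 1)   # period 2, nontrivially
    return gcd(a - 1, N), gcd(a + 1, N)

print(factor_knowing_the_answer(3, 5))   # (3, 5): "factors" N = 15 using a = 4
```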

What increases when a self-organizing system organizes itself? Logical depth to the rescue.

(An earlier version of this post appeared in the latest newsletter of the American Physical Society’s special interest group on Quantum Information.)
One of the most grandly pessimistic ideas from the 19th century is that of  “heat death” according to which a closed system, or one coupled to a single heat bath at thermal  equilibrium,  eventually inevitably settles into an uninteresting state devoid of life or macroscopic motion.  Conversely, in an idea dating back to Darwin and Spencer, nonequilibrium boundary conditions are thought to have caused or allowed the biosphere to self-organize over geological time.  Such godless creation, the bright flip side of the godless hell of heat death, nowadays seems to worry creationists more than Darwin’s initially more inflammatory idea that people are descended from apes. They have fought back, using superficially scientific arguments, in their masterful peanut butter video.
Fig. 1. Self-organization versus heat death.
Much simpler kinds of complexity generation occur in toy models with well-defined dynamics, such as this one-dimensional reversible cellular automaton.  Started from a simple initial condition at the left edge (periodic, but with a symmetry-breaking defect) it generates a deterministic wake-like history of growing size and complexity.  (The automaton obeys a second order transition rule, with a site’s future differing from its past iff exactly two of its first and second neighbors in the current time slice, not counting the site itself, are black and the other two are white.)

Fig. 2. Spacetime history of the reversible cellular automaton; time runs from left to right.
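For the curious, here is a minimal sketch of the stated rule (my own implementation; the lattice size and the exact "periodic plus defect" initial condition are guesses, not the ones used for Fig. 2).

```python
# Second-order reversible cellular automaton, per the rule quoted above.
# Sketch only: sites live on a ring, time prints downward rather than
# left-to-right, and the initial condition merely approximates a periodic
# background with a symmetry-breaking defect.

def step(past, present):
    """A site's future differs from its past iff exactly two of its first and
    second neighbors (i-2, i-1, i+1, i+2) in the present slice are black."""
    n = len(present)
    future = [past[i] ^ (sum(present[(i + d) % n] for d in (-2, -1, 1, 2)) == 2)
              for i in range(n)]
    return present, future

n = 80
present = [i % 2 for i in range(n)]      # simple periodic background...
present[n // 2] ^= 1                     # ...with a symmetry-breaking defect
past = list(present)                     # a second-order rule needs two slices

history = [present]
state = (past, present)
for _ in range(40):
    state = step(*state)
    history.append(state[1])

for row in history:
    print("".join("#" if cell else "." for cell in row))
```

Because the update is second order (the future depends on both the present and the past slice), running the same step with the two slices swapped retraces the history backwards, which is what makes the rule reversible.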
But just what is it that increases when a self-organizing system organizes itself?
Such organized complexity is not a thermodynamic potential like entropy or free energy.  To see this, consider transitions between a flask of sterile nutrient solution and the bacterial culture it would become if inoculated by a single seed bacterium.  Without the seed bacterium, the transition from sterile nutrient to bacterial culture is allowed by the Second Law, but prohibited by a putative "slow growth law", which forbids organized complexity from increasing quickly, except with low probability.
Fig. 3

The same example shows that organized complexity is not an extensive quantity like free energy.  The free energy of a flask of sterile nutrient would be little altered by adding a single seed bacterium, but its organized complexity must have been greatly increased by this small change.  The subsequent growth of many bacteria is accompanied by a macroscopic drop in free energy, but little change in organized complexity.
The relation between universal computer programs and their outputs has long been viewed as a formal analog of the relation between theory and phenomenology in science, with the various programs generating a particular output x being analogous to alternative explanations of the phenomenon x.  This analogy draws its authority from the ability of universal computers to execute all formal deductive processes and their presumed ability to simulate all processes of physical causation.
In algorithmic information theory, the Kolmogorov complexity of a bit string x is defined as the size in bits of its minimal program x*, the smallest (and lexicographically first, in case of ties) program causing a standard universal computer U to produce exactly x as output and then halt.
 x* = min{p: U(p)=x}
Because of the ability of universal machines to simulate one another, a string's Kolmogorov complexity is machine-independent up to a machine-dependent additive constant, and similarly is equal to within an additive constant to the string's algorithmic entropy H_U(x), the negative log of the probability that U would output exactly x and halt if its program were supplied by coin tossing.  Bit strings whose minimal programs are no smaller than the string itself are called incompressible, or algorithmically random, because they lack internal structure or correlations that would allow them to be specified more concisely than by a verbatim listing. Minimal programs themselves are incompressible to within O(1), since otherwise their minimality would be undercut by a still shorter program.  By contrast to minimal programs, any program p that is significantly compressible is intrinsically implausible as an explanation for its output, because it contains internal redundancy that could be removed by deriving it from the more economical hypothesis p*.  In terms of Occam's razor, a program that is compressible by s bits is deprecated as an explanation of its output because it suffers from s bits worth of ad-hoc assumptions.
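For reference, here is my transcription of these quantities into more standard notation (the symbols K_U, c_{U,V} and the prefix-free convention are my choices, not spelled out in the post):

```latex
% My transcription; the constants and the prefix-free convention are assumptions.
\begin{align*}
  K_U(x) &= |x^*|, \qquad x^* = \min\{\, p : U(p) = x \,\}
    && \text{(smallest program, lexicographically first among ties)}\\
  H_U(x) &= -\log_2 \sum_{p\,:\,U(p)=x} 2^{-|p|}
    && \text{(algorithmic entropy: coin-tossed programs yielding $x$)}\\
  |K_U(x) - K_V(x)| &\le c_{U,V}, \qquad |K_U(x) - H_U(x)| \le c_U
    && \text{(machine independence up to additive constants)}
\end{align*}
```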
Though closely related[1] to statistical entropy, Kolmogorov complexity itself is not a good measure of organized complexity because it assigns high complexity to typical random strings generated by coin tossing, which intuitively are trivial and unorganized.  Accordingly many authors have considered modified versions of Kolmogorov complexity—also measured in entropic units like bits—hoping thereby to quantify the nontrivial part of a string’s information content, as opposed to its mere randomness.  A recent example is Scott Aaronson’s notion of complextropy, defined roughly as the number of bits in the smallest program for a universal computer to efficiently generate a probability distribution relative to which x  cannot efficiently be recognized as atypical.
However, I believe that entropic measures of complexity are generally unsatisfactory for formalizing the kind of complexity found in intuitively complex objects found in nature or gradually produced from simple initial conditions by simple dynamical processes, and that a better approach is to characterize an object’s complexity by the amount of number-crunching (i.e. computation time, measured in machine cycles, or more generally other dynamic computational resources such as time, memory, and parallelism) required to produce the object from a near-minimal-sized description.
This approach, which I have called  logical depth, is motivated by a common feature of intuitively organized objects found in nature: the internal evidence they contain of a nontrivial causal history.  If one accepts that an object’s minimal program represents its most plausible explanation, then the minimal program’s run time represents the number of steps in its most plausible history.  To make depth stable with respect to small variations in x or U, it is necessary also to consider programs other than the minimal one, appropriately weighted according to their compressibility, resulting in the following two-parameter definition.

  • An object x is called d-deep with s bits of significance iff every program for U to compute x in time < d is compressible by at least s bits (restated in symbols below). This formalizes the idea that every hypothesis for x to have originated more quickly than in time d contains s bits worth of ad-hoc assumptions.
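In symbols, this is how I would restate the definition (the notation time_U(p) and depth_s(x) is mine, not Bennett's):

```latex
% My restatement; time_U(p) is the running time of program p on U, and
% K_U(p) its Kolmogorov complexity.
\[
  x \text{ is $d$-deep with $s$ bits significance}
  \iff
  \forall p\; \bigl[\, U(p) = x \;\wedge\; \mathrm{time}_U(p) < d
    \;\Longrightarrow\; |p| \ge K_U(p) + s \,\bigr],
\]
\[
  \text{equivalently}\quad
  \mathrm{depth}_s(x) \;=\; \min\bigl\{\, \mathrm{time}_U(p) : U(p) = x,\ |p| < K_U(p) + s \,\bigr\}.
\]
```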

Dynamic and static resources, in the form of the parameters d and s, play complementary roles in this definition: d as the quantifier and s as the certifier of the object's nontriviality.  Invoking the two parameters in this way not only stabilizes depth with respect to small variations of x and U, but also makes it possible to prove that depth obeys a slow growth law, without which any mathematical definition of organized complexity would seem problematic.

  • A fast deterministic process cannot convert shallow objects to deep ones, and a fast stochastic process can only do so with low probability.  (For details see Bennett88.)

 
Logical depth addresses many infelicities and problems associated with entropic measures of complexity.

  • It does not impose an arbitrary rate of exchange between the independent variables of strength of evidence and degree of nontriviality of what the evidence points to, nor an arbitrary maximum complexity that an object can have, relative to its size.  Just as a microscopic fossil can validate an arbitrarily long evolutionary process, so a small fragment of a large system, one that has evolved over a long time to a deep state, can contain evidence of the entire depth of the large system, which may be more than exponential in the size of the fragment.
  • It helps explain the increase of complexity at early times and its decrease at late times by providing different mechanisms for these processes.  In figure 2, for example, depth increases steadily at first because it reflects the duration of the system’s actual history so far.  At late times, when the cellular automaton has run for a generic time comparable to its Poincare recurrence time, the state becomes shallow again, not because the actual history was uneventful, but because evidence of that history has become degraded to the point of statistical insignificance, allowing the final state to be generated quickly from a near-incompressible program that short-circuits the system’s actual history.
  • It helps explain why some systems, despite being far from thermal equilibrium, never self-organize.  For example in figure 1, the gaseous sun, unlike the solid earth, appears to lack means of remembering many details about its distant past.  Thus while it contains evidence of its age (e.g. in its hydrogen/helium ratio), almost all evidence of particular details of its past, e.g. the locations of sunspots, is probably obliterated fairly quickly by the sun's hot, turbulent dynamics.  On the other hand, systems with less disruptive dynamics, like our earth, could continue increasing in depth for as long as their nonequilibrium boundary conditions persisted, up to an exponential maximum imposed by Poincare recurrence.
  • Finally, depth is robust with respect to transformations that greatly alter an object’s size and Kolmogorov complexity, and many other entropic quantities, provided the transformation leaves intact significant evidence of a nontrivial history. Even a small sample of the biosphere, such as a single DNA molecule, contains such evidence.  Mathematically speaking, the depth of a string x is not much altered by replicating it (like the bacteria in the flask), padding it with zeros or random digits, or passing it though a noisy channel (although the latter treatment decreases the significance parameter s).  If the whole history of the earth were derandomized, by substituting deterministic pseudorandom choices for all its stochastic accidents, the complex objects in this substitute world would have very little Kolmogorov complexity, yet their depth would be about the same as if they had resulted from a stochastic evolution.

The remaining infelicities of logical depth as a complexity measure are those afflicting computational complexity and algorithmic entropy theories generally.

  • Lack of tight lower bounds: because of the open P = PSPACE question, one cannot exhibit a system that provably generates depth more than polynomial in the space used.
  • Semicomputability:  deep objects can be proved deep (with exponential effort) but shallow ones can’t be proved shallow.  The semicomputability of depth, like that of Kolmogorov complexity, is an unavoidable consequence of the unsolvability of the halting problem.

The following observations partially mitigate these infelicities.

  • Using the theory of cryptographically strong pseudorandom functions one can argue (if such functions exist) that deep objects can be produced efficiently, in time polynomial and space polylogarithmic in their depth, and indeed that they are produced efficiently by some physical processes.
  • Semicomputability does not render a complexity measure entirely useless. Even though a particular string cannot be proved shallow, and requires an exponential amount of effort to prove it deep, the depth-producing properties of stochastic processes can be established, assuming the existence of cryptographically strong pseudorandom functions. This parallels the fact that while no particular string can be proved to be algorithmically random (incompressible), it can be proved that the statistically random process of coin tossing produces algorithmically random strings with high probability.

 
Granting that a logically deep object is one plausibly requiring a lot of computation to produce, one can consider various related notions:

  • An object  y  is deep relative to  x  if all near-minimal sized programs for computing  y  from  x  are slow-running.  Two shallow objects may be deep relative to one another, for example a random string and its XOR with a deep string.
  • An object can be called cryptic if it is computationally difficult to obtain a near-minimal sized program for the object from the object itself, in other words if any near-minimal sized program for x is deep relative to x.  One-way functions, if they exist, can be used to define cryptic objects; for example, in a computationally secure but information theoretically insecure cryptosystem, plaintexts should be cryptic relative to their ciphertexts.
  • An object can be called ambitious if, when presented to a universal computer as input, it causes the computer to embark on a long but terminating computation. Such objects, though they determine a long computation, do not contain evidence of it actually having been done.  Indeed they may be shallow and even algorithmically random.
  • An object can be called wise if it is deep and a large and interesting family of other deep objects are shallow relative to it. Efficient oracles for hard problems, such as the characteristic function of an NP-complete set, or the characteristic set K of the halting problem, are examples of wise objects.  Interestingly, Chaitin’s omega is an exponentially more compact oracle for the halting problem than K is, but it is so inefficient to use that it is shallow and indeed algorithmically random.

Further details about these notions can be found in Bennett88.  K.W. Regan in Dick Lipton's blog discusses the logical depth of Bill Gasarch's recently discovered solutions to the 17×17 and 18×18 four-coloring problems.
I close with some comments on the relation between organized complexity and thermal disequilibrium, which since the 19th century has been viewed as an important, perhaps essential, prerequisite for self-organization.   Broadly speaking, locally interacting systems at thermal equilibrium obey the Gibbs phase rule, and its generalization in which the set of independent parameters is enlarged to include not only intensive variables like temperature, pressure and magnetic field, but also all parameters of the system’s Hamiltonian, such as local coupling constants.   A consequence of the Gibbs phase rule is that for generic values of the independent parameters, i.e. at a generic point in the system’s phase diagram, only one phase is thermodynamically stable.  This means that if a system’s independent parameters are set to generic values, and the system is allowed to come to equilibrium, its structure will be that of this unique stable Gibbs phase, with spatially uniform properties and typically short-range correlations.   Thus for generic parameter values, when a system is allowed to relax to thermal equilibrium, it entirely forgets its initial condition and history and exists in a state whose structure can be adequately approximated by stochastically sampling the distribution of microstates characteristic of that stable Gibbs phase.  Dissipative systems—those whose dynamics is not microscopically reversible or whose boundary conditions prevent them from ever attaining thermal equilibrium—are exempt from the Gibbs phase rule for reasons discussed in BG85, and so are capable, other conditions being favorable, of producing structures of unbounded depth and complexity in the long time limit. For further discussion and a comparison of logical depth with other proposed measures of organized complexity, see B90.
 


[1] An elementary result of algorithmic information theory is that for any probability ensemble of bit strings (representing e.g. physical microstates), the ensemble's Shannon entropy differs from the expectation of its members' algorithmic entropy by at most the number of bits required to describe a good approximation to the ensemble.
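In symbols, the statement I have in mind is the standard one (my notation; the O(1) slack depends on the choice of universal machine):

```latex
% My notation: P is a computable ensemble of bit strings, H(P) its Shannon
% entropy, K(P) the number of bits needed to describe a good approximation to P.
\[
  H(P) \;\le\; \sum_x P(x)\, K(x) \;\le\; H(P) + K(P) + O(1).
\]
```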

 
 

Immanants

Recently computer scientist Leslie Valiant won the ACM's Turing Award, considered one of the most prestigious prizes in computer science. Valiant is famous for many results, not the least of which are his results on the permanent of a matrix. Over at Godel's Lost Letter, the iced tea man has a nice collection of interesting permanent-related complexity facts. Recall that the permanent of an n by n matrix [latex]A[/latex] is given by

[latex]{\rm per}\, A = \sum_{\pi \in S_n} \prod_{i=1}^n A_{i,\pi(i)}[/latex]

where [latex]S_n[/latex] is the symmetric group on n elements, and similarly the determinant of an n by n matrix [latex]A[/latex] is given by

[latex]{\rm det}\, A = \sum_{\pi \in S_n} (-1)^{{\rm sgn}\, \pi} \prod_{i=1}^n A_{i,\pi(i)}[/latex]

where [latex]{\rm sgn}\, \pi[/latex] is 0 if the permutation is made up of an even number of transpositions and 1 if the permutation is made up of an odd number of transpositions. One day I was sitting in my office when a physics undergraduate came by (an ex-sailor from Alaska) and said…"Hey Dave, what if we replace the function in front of each term in the permanent and determinant by a character of a symmetric group irrep?" Which of course knocked me off my chair, first because what undergrad physics major knows about symmetric group irreps, and second because I had never thought about this interesting twist on the permanent and determinant.
After a little google fu, we quickly found the answer. For an n by n matrix [latex]A[/latex] the immanant is given by

[latex]{\rm imm}_\lambda\, A = \sum_{\pi \in S_n} \chi_\lambda(\pi) \prod_{i=1}^n A_{i,\pi(i)}[/latex]

where [latex]\lambda[/latex] labels the irrep of [latex]S_n[/latex] and [latex]\chi_\lambda(\pi)[/latex] is the character of the irrep [latex]\lambda[/latex] at group element [latex]\pi[/latex]. Recall that the irreps of the symmetric group [latex]S_n[/latex] are labeled by partitions of [latex]n[/latex]. A partition of [latex]n[/latex] is a series of non-increasing positive integers that sums to n, [latex](\lambda_1, \lambda_2, \dots, \lambda_r)[/latex] with [latex]\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_r[/latex] such that [latex]\sum_{i=1}^r \lambda_i = n[/latex]. The partition [latex](n)[/latex] corresponds to the trivial irrep, in which [latex]\chi_{(n)}(\pi)=1[/latex], and on the opposite end of the spectrum, the partition [latex](1,1,\dots,1)[/latex] corresponds to the alternating irrep, where [latex]\chi_{(1,1,\dots,1)}(\pi)=(-1)^{{\rm sgn}\, \pi}[/latex]. Thus we see that the permanent and determinant sit at the two ends of a spectrum of polynomials known as the immanants.
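Here is a brute-force sketch (mine, not from the post) that recovers the permanent and determinant as the two extreme immanants; characters of general irreps would need extra machinery (e.g. the Murnaghan-Nakayama rule), which I omit.

```python
# Brute-force immanant for the two easy characters: the trivial character gives
# the permanent and the sign character gives the determinant.
from itertools import permutations

def immanant(A, character):
    """Sum over permutations pi of character(pi) * prod_i A[i][pi(i)]."""
    n = len(A)
    total = 0
    for pi in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][pi[i]]
        total += character(pi) * prod
    return total

def sign(pi):
    """(-1)^(number of inversions): the character of the alternating irrep."""
    inv = sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi)) if pi[i] > pi[j])
    return -1 if inv % 2 else 1

A = [[1, 2], [3, 4]]
print(immanant(A, lambda pi: 1))  # permanent: 1*4 + 2*3 = 10
print(immanant(A, sign))          # determinant: 1*4 - 2*3 = -2
```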
One of Valiant's most well known results is that evaluating the permanent of a matrix with 0 and 1 as its possible entries is #P complete, a.k.a. really hard. On the other hand, evaluating the determinant is not computationally difficult at all. At first this seems odd because the determinant has those minus signs, which you would think would make it harder and not easier, but alas this is not so. So what about the computational complexity of the immanant? The Computational Complexity of Immanants by Peter Bürgisser (sorry, I could only find a paywalled version) shows that there is a sense in which certain immanants are also #P complete to evaluate. Roughly, the idea is that if one defines a class of immanants whose partitions have a "step" in them that grows polynomially in [latex]n[/latex] (the step for the permanent is [latex]n[/latex]), then these will be #P complete to evaluate.
So what good are immanants? Well I’m sure mathematicians have some fine uses for them. One interesting thing to note is that immanantal polynomials of adjacency matrices of graphs give you graph invariants (for the determinant this is the same as saying that the eigenvalues of the adjacency matrix are a graph invariant.) However it is known that this, like the eigenvalues, is not a complete set of graph invariants and so is not a route towards efficiently solving graph isomorphism. So no real use there, but I’m sure an object so symmetric must be of some use, no?

US Quantum Computing Theory CS Hires?

I'm trying to put together a list of people doing theoretical quantum computing who have been hired into CS departments at United States universities over the last decade. So the requirements I'm looking for are (a) hired into a tenure-track position at a US university with at least fifty percent of their appointment in CS, (b) hired after 2001, and (c) they would say their main area of research is quantum information science theory.
Here is my current list:

  • Scott Aaronson (MIT)
  • P. Oscar Boykin (University of Florida, Computer Engineering)
  • Amit Chakrabarti (Dartmouth)
  • Vicky Choi (Virginia Tech)
  • Hang Dinh (Indiana University South Bend)
  • Sean Hallgren (Penn State)
  • Alexei Kitaev (Caltech)
  • Andreas Klappernecker (Texas A&M)
  • Igor Markov (Michigan)
  • Yaoyun Shi (Michigan)
  • Wim van Dam (UCSB)
  • Pawel Wocjan (UCF)

Apologies to anyone I’ve missed! So who have I missed? Please comment!
Update: Steve asks for a similar list in physics departments. Here is my first stab at such a list…though it's a bit harder because the line between a quantum computing theorist and, say, an AMO theorist who studies systems that might be used for quantum computing is blurry.
Physicists, quantum computing theory:

  • Lorenza Viola (Dartmouth)
  • Stephen van Enk (Oregon)
  • Alexei Kitaev (Caltech)
  • Paolo Zanardi (USC)
  • Mark Byrd (Southern Illinois University)
  • Luming Duan (Michigan)
  • Kurt Jacobs (UMass Boston)
  • Peter Love (Haverford)
  • Jon Dowling (LSU)

I'm sure I missed a lot here; please help me fill it in.

TQC 2011 and a Bonus! Rant!

So most of my conference announcements are going out on qspeak now (and will be digested every week in a post here). But since I'm helping out with this one, I thought I'd post a separate note here. TQC 2011 will be held in Madrid, Spain from May 24 to May 26. The important deadline is January 24, 2011 for submissions. Website here.
Note that TQC has a proceedings (for those who care about the politics of getting a job in a computer science department, the fact that QIP does not have a proceedings is not good for the field of quantum computing. The lack of best paper and best student paper awards at conferences is even worse. But that's just silly politics of, you know, getting a job. Does it matter to the science of the conference? No. Does it matter if you don't want the field to disappear from the face of the earth because universities won't hire faculty in the area? Probably. Of course people will argue that a QIP proceedings would prohibit STOC and FOCS submissions, but seeing as how exactly one quantum paper made it to FOCS this year…)

Consequence of the Concept of the Universe as a Computer

The ACM's Ubiquity has been running a symposium on the question "What is Computation?" Amusingly they let a slacker like me take a shot at the question, and my essay has now been posted: Computation and Fundamental Physics. As a reviewer of the article said, this reads like an article someone would have written after attending a science fiction convention. Which I think was supposed to be an insult, but which I take as a blessing. For the experts in the audience, the fun part starts at the "Fundamental Physics" heading.