Reading List: Graph Isomorphism

Warning: this idiosyncratic list of reading material is in no way meant to be comprehensive, nor is it even guaranteed to focus on the most important papers concerning graph isomorphism.  Suggestions for other papers to add to the list are greatly appreciated: leave a comment!

The graph isomorphism problem is an intriguing problem. A graph is a set of vertices along with the edges connecting these vertices. The graph isomorphism problem is to determine, given the descriptions of two graphs, whether they really describe the same graph. For example, is it possible to relabel and move the vertices in the two graphs below so as to make them look the same?

Give up? Okay, well this one wasn’t so hard, but yes, these graphs are isomorphic 🙂 Usually we aren’t given two pictures of the graphs, but instead are given a succinct description of which vertices are connected to each other. This is given by either an adjacency matrix or an adjacency list.  The adjacency matrix of a graph with $latex n$ vertices is an $latex n \times n$ matrix whose $latex (i,j)$th entry is the number of edges from vertex $latex i$ to vertex $latex j$.  An algorithm for the graph isomorphism problem is efficient if its running time is polynomial in the number of vertices of the graphs involved.
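To make the adjacency-matrix representation concrete, here is a minimal sketch in Python (my own illustration; the function name and the edge-list input format are not from the post):

```python
def adjacency_matrix(n, edges):
    """Return the n x n adjacency matrix of an undirected graph.

    `edges` is a list of (i, j) pairs with 0 <= i, j < n; the (i, j) entry
    of the result counts the edges between vertex i and vertex j.
    """
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] += 1
        if i != j:
            A[j][i] += 1  # undirected: the matrix is symmetric
    return A

# A path on three vertices, 0 -- 1 -- 2:
A = adjacency_matrix(3, [(0, 1), (1, 2)])
# A == [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```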
One reason why graph isomorphism is interesting is its current status in computational complexity. It is not likely to be NP-complete (as this would cause a collapse of the polynomial hierarchy), yet despite a considerable amount of work, there is no known polynomial time algorithm for this problem. Graph isomorphism is known to be in the complexity class NP intersect co-AM, which is kind of similar to where the decision version of factoring lies (NP intersect co-NP). This similarity, along with the fact that a natural generalization of the problem quantum computers solve when factoring (the hidden subgroup problem) is related to graph isomorphism (efficiently solving the hidden subgroup problem over the symmetric group would yield an efficient algorithm for graph isomorphism), has led to the hope that quantum computers might be able to efficiently solve graph isomorphism. I will admit that, over the years, I have slowly but surely developed quite a severe case of the graph isomorphism disease, albeit my case appears to be of the quantum kind. Like any disease, it is important to spread the disease. So here is a collection of readings about the graph isomorphism problem.
One interesting aspect of the graph isomorphism problem is that there are classes of graphs for which efficient polynomial time algorithms for deciding isomorphism do exist. Because you can get downbeat pretty quickly bashing your head against the problem, I think it would be nice to start out with an easy case: tree isomorphism.  The efficient algorithm for this is usually attributed to Aho, Hopcroft and Ullman (1974), but I really like

  • “Tree Isomorphism Algorithms: Speed vs. Clarity” by Douglas M. Campbell and David Radford, Mathematics Magazine, Vol. 64, No. 4 (Oct., 1991), pp. 252-261 (paper here)

It’s a fun easy read that will get you all excited about graph isomorphism, I’m sure.
Okay, well after that fun start, maybe staying within the realm of polynomial time algorithms continues to feel good.  One of the classic ways to attempt to solve graph isomorphism is as follows: compute the eigenvalues of the adjacency matrices of the two graphs, and sort these eigenvalues.  If the sorted eigenvalues are different, then the graphs cannot be isomorphic.  Why?  Well if the graphs are isomorphic, there exists an $latex n \times n$ permutation matrix $latex P$ (a matrix made of zeros and ones that has only a single one in each row and in each column) such that $latex A_2=PA_1 P^{-1}$ (here $latex A_1$ and $latex A_2$ are the adjacency matrices of the two graphs).  And recall that the eigenvalues of $latex X$ and $latex MXM^{-1}$ are the same for any invertible matrix $latex M$.  Thus isomorphic graphs must have the same eigenvalues.  Note, however, that this does not imply that non-isomorphic graphs have different eigenvalues.  For example, consider the following two trees

These trees are clearly not isomorphic.  On the other hand, if you take the eigenvalues of their adjacency matrices you will find out that they are both given by

$latex \left\{\frac{1}{2}\left(-1-\sqrt{13}\right),\frac{1}{2}\left(1+\sqrt{13}\right),\frac{1}{2}\left(1-\sqrt{13}\right),\frac{1}{2}\left(-1+\sqrt{13}\right),0,0,0,0\right\}$.

So there exist isospectral (same eigenvalues of the adjacency matrix) but non-isomorphic graphs.  Thus taking the eigenvalues of a graph isn’t enough to distinguish non-isomorphic graphs.  In practice, however, this usually works pretty well.  Further, if we consider graphs where the maximum multiplicity of the eigenvalues (recall that an eigenvalue may appear multiple times in the list of eigenvalues of a matrix—the multiplicity is the number of times an eigenvalue appears) is bounded (that is, the multiplicity does not grow with the graph size), then there is an efficient algorithm for graph isomorphism.  This was first shown for multiplicity one graphs, i.e. where all the eigenvalues are distinct, by Leighton and Miller.  This work is unpublished, but you can find the handwritten notes at Gary Miller’s website:

  • “Certificates for Graphs with Distinct Eigen Values,” by F. Thomson Leighton and Gary L. Miller, unpublished, 1979 (paper here)

Another place to learn about this, without all the smudge marks, is at the Yale course site of Dan Spielman:

  • Spectral Graph Theory, Dan Spielman, Fall 2009 (course website, notes on multiplicity-free graph isomorphism)

Here would be a good place to mention the general case of bounded eigenvalue multiplicity:

  • “Isomorphism of graphs with bounded eigenvalue multiplicity” by Laszlo Babai, D. Yu. Grigoryev, and David M. Mount, STOC 1982 (paper here)

Though jumping in to that paper after the previous ones is a bit of a leap.
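Before leaving the warm-up, the spectral test described above fits in a few lines. A sketch (my own illustration, assuming numpy; not code from any of the papers above):

```python
# Spectral screening for graph isomorphism: isomorphic graphs must have the
# same sorted adjacency spectrum, but the converse fails (isospectral
# non-isomorphic graphs exist).
import numpy as np

def sorted_spectrum(A):
    """Sorted eigenvalues of a symmetric adjacency matrix."""
    return np.sort(np.linalg.eigvalsh(A))

# Adjacency matrix of the 4-cycle 0-1-2-3-0.
A1 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)

# Relabel the vertices by a permutation matrix P: A2 = P A1 P^{-1}.
P = np.eye(4)[[2, 0, 3, 1]]   # a permutation matrix
A2 = P @ A1 @ P.T             # P^{-1} = P^T for permutation matrices

# Isomorphic graphs always pass the test; passing does not prove
# isomorphism, as the two isospectral trees above show.
assert np.allclose(sorted_spectrum(A1), sorted_spectrum(A2))
```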
Okay, so now you’re all warmed up.  Indeed you’re probably so stoked that you think a polynomial time classical algorithm is just around the corner.  Time to put a damper on those thoughts.
First let’s start with the paper that is currently the best algorithm for general graphs.  Well, actually, for the original best algorithm I’m not sure if there is an actual paper, but there is a paper which achieves a similar bound:

  • “Canonical labeling of graphs” by László Babai and Eugene M. Luks, STOC 1983 (paper here)

This describes a subexponential (or, err, what did Scott decide to call this?), $latex \exp(O(\sqrt{n \log n}))$, time algorithm for a graph with $latex n$ vertices.  Now as a reading list I don’t recommend jumping into this paper quite yet, but I just wanted to douse your optimism for a classical algorithm by showing you (a) the best we have in general is a subexponential time algorithm, (b) that record has stood for nearly three decades, and (c) that if you look at the paper you can see it’s not easy.  Abandon hope all ye who enter?
Okay, so apparently you haven’t been discouraged, so you’re pressing onward.  So what to read next?
Well, one strategy would be to go through the list of all the classes of graphs for which there are known efficient algorithms.  This is quite a list, and it is useful to at least gain some knowledge of these papers, if only to not re-solve one of these cases (above we noted the paper for bounded eigenvalue multiplicity):

  • Planar graphs: “Linear time algorithm for isomorphism of planar graphs” by J.E. Hopcroft and J.K. Wong, STOC 1974 (paper here)
  • or more generally graphs of bounded genus: “Isomorphism testing for graphs of bounded genus” by Gary Miller, STOC 1980 (paper here)
  • Graphs with bounded degree: “Isomorphism of graphs of bounded valence can be tested in polynomial time” by Eugene M. Luks, Journal of Computer and System Sciences 25: 42–65, 1982 (paper here)

Along a similar vein, there is a subexponential time algorithm for strongly regular graphs that is better than the best general algorithm described above:

  • “Faster Isomorphism Testing of Strongly Regular Graphs” by Daniel A. Spielman STOC 1996 (paper here)

At this point, you’re probably overloaded, so a good thing to do when you’re overloaded is to find a book.  One interesting book is

  • “Group-Theoretic Algorithms and Graph Isomorphism” by C.M. Hoffmann, 1982 (link here)

As you can see, this book was written before the best algorithms were announced.  Yet it is fairly readable and begins to set up the group theory you’ll need if you want to conquer the later papers.  Another book with some nice results is

  • “The graph isomorphism problem: its structural complexity” by Johannes Köbler, Uwe Schöning, Jacobo Torán (1993) (buy it here)

where you will find some nice basic complexity theory results about graph isomorphism.
Well, now that you’ve gone and started learning about some computational complexity in relationship to graph isomorphism, it’s probably a good time to stop and look at actual practical algorithms for graph isomorphism.  The king of the hill here, as far as I know, is the program nauty (No AUTomorphism, Yes?) by Brendan D. McKay.  Sadly a certain search engine seems to be a bit too weighted to the word “naughty” (note to self: consider porn-related keywords when naming a software package):

Nauty’s main task is determining the automorphism group of a graph (the group of permutations that leave the graph representation unchanged) but nauty also produces a canonical label that can be used for testing isomorphism.  Nauty can perform isomorphism tests of graphs of 100 vertices in about a second.  There is an accompanying paper describing how nauty works:

  • “Practical Graph Isomorphism”, by Brendan D. McKay, Congressus Numerantium, 30, 45-87, 1981. (paper here)

A canonical label for a graph is a function from graphs to a set of labels such that two graphs receive the same label if and only if they are isomorphic.  This is what nauty produces: if you want to know whether two graphs are isomorphic you just compute the canonical forms of these graphs and compare the results.  If they have the same canonical form they are isomorphic; if they don’t, they are not isomorphic.
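The definition can be illustrated with a deliberately naive canonical form (this is emphatically not nauty’s algorithm, whose refinement-and-pruning machinery exists precisely to avoid the $latex n!$ cost here; every name below is my own):

```python
# Toy canonical form: minimize the sorted edge list over all vertex
# relabelings.  Correct, but takes n! time -- only usable on tiny graphs.
from itertools import permutations

def canonical_form(n, edges):
    edge_set = {frozenset(e) for e in edges}
    best = None
    for perm in permutations(range(n)):
        # Relabel vertex v as perm[v] and write the graph as a sorted
        # tuple of edges; keep the lexicographically smallest result.
        relabeled = tuple(sorted(tuple(sorted((perm[u], perm[v])))
                                 for u, v in edge_set))
        if best is None or relabeled < best:
            best = relabeled
    return best

# A path 0-1-2 and the "same" path with its vertices shuffled:
assert canonical_form(3, [(0, 1), (1, 2)]) == canonical_form(3, [(2, 0), (0, 1)])
# A path vs. a triangle: different canonical forms.
assert canonical_form(3, [(0, 1), (1, 2)]) != canonical_form(3, [(0, 1), (1, 2), (2, 0)])
```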

Which leads one to thinking about other labeling schemes for solving graph isomorphism.  One of the simplest attempts (an attempt because it does not work) to make a canonical labeling scheme is the following: begin by labeling the vertices of the two graphs by their degrees.  If you then sort these lists of labels and the sorted lists are different, then the graphs cannot be isomorphic.  Next one can replace each vertex label by a label made up of the multiset of the current vertex label and the labels of the neighboring vertices (recall a multiset is a set where entries can appear multiple times).  One can then sort these multisets lexicographically and compare the two sorted lists for both graphs.  Again, if the lists are not the same, the graphs are not isomorphic.  If you still haven’t discovered that the graphs are not isomorphic you can continue, but first, to keep the labels small, you should replace the lexicographically ordered labels by small integers representing the sorted order of the labels.  Then one can iterate the construction of a new multiset labeling from these new integer labelings.  This is a simple scheme to implement.  Below, for example, I perform it for two very simple graphs.  First:

Next:

At which point we stop, because if we compare the sorted lists of these multisets they are different (one has $latex \{2,2,2\}$ while the one on the right does not).   One can show that the above labeling procedure will always stabilize after a polynomial number of iterations (can you see why?), but it’s also quite clear that it doesn’t work in general (i.e. it doesn’t provide a canonical labeling for all graphs), since it gets stuck right off the bat on regular graphs (graphs where every vertex has the same degree).  Here are two 3-regular graphs that are not isomorphic:

The above algorithm doesn’t even get off the ground with these non-isomorphic graphs 🙂  One can imagine, however, more sophisticated versions of the above algorithm, for example by labeling edges instead of vertices.  It turns out there is a very general set of algorithms of this form, known as the k-dimensional Weisfeiler-Lehman algorithm, that work along the lines of what we have described above.  Boris Weisfeiler, as I mentioned in a previous post, went missing in Chile in 1985 under presumably nefarious circumstances.  For a good introduction to the WL algorithm, I recommend the paper

  • “On Construction and Identification of Graphs (with contributions by A. Lehman, G. M. Adelson-Velsky, V. Arlazarov, I. Faragev, A. Uskov, I. Zuev, M. Rosenfeld and B. Weisfeiler. Edited by B. Weisfeiler)”, Lecture Notes in Mathematics, 558 1976 (paper here)

The dimension $latex k$ in the WL algorithm refers to the fact that the basic objects being considered are subsets of cardinality $latex k$ of $latex \{1,2,\dots,n\}$.  If the $latex k$-dimensional WL algorithm solved graph isomorphism for constant $latex k$, then this would give an efficient algorithm for graph isomorphism.  For many years whether this was true or not was not known, and a large body of work was developed (mostly in Russia) around this method, including the introduction of topics such as cellular algebras and coherent configurations.  However, in 1991, Cai, Fürer, and Immerman showed that this approach does not yield an efficient graph isomorphism algorithm.  The paper where they do that is very readable and gives a good history of the WL algorithm

  • “An optimal lower bound on the number of variables for graph identification” by Jin-Yi Cai, Martin Fürer, and Neil Immerman, Combinatorica, 12, 389-410 1991 (paper here)
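The 1-dimensional version of the refinement scheme described a moment ago is short enough to implement directly. A sketch in Python (function and variable names are my own), showing that it never gets off the ground on regular graphs:

```python
# 1-dimensional Weisfeiler-Lehman (color refinement): iterate
# "label := (old label, multiset of neighbor labels)" until stable.
# Graphs are dicts mapping each vertex to its set of neighbors.
def refine(graph):
    """Return the sorted multiset of stable labels, compressed to small ints."""
    labels = {v: len(nbrs) for v, nbrs in graph.items()}   # start with degrees
    for _ in range(len(graph)):   # stabilizes within n rounds
        sig = {v: (labels[v], tuple(sorted(labels[w] for w in graph[v])))
               for v in graph}
        # Compress each signature to its rank in the sorted list of signatures.
        ranks = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: ranks[sig[v]] for v in graph}
        if new == labels:
            break
        labels = new
    return sorted(labels.values())

# The triangular prism and K_{3,3}: both 3-regular on 6 vertices, clearly
# non-isomorphic (the prism has triangles, K_{3,3} has none) ...
prism = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1, 5},
         3: {0, 4, 5}, 4: {1, 3, 5}, 5: {2, 3, 4}}
k33 = {0: {3, 4, 5}, 1: {3, 4, 5}, 2: {3, 4, 5},
       3: {0, 1, 2}, 4: {0, 1, 2}, 5: {0, 1, 2}}

# ... yet refinement cannot tell them apart: every vertex keeps the same label.
assert refine(prism) == refine(k33)
```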

At this point you might be thinking to yourself, “Hey, isn’t this the Quantum Pontiff?  Where is the discussion of quantum approaches to graph isomorphism?”  Okay, maybe that’s not what you’re thinking, but what the hell, now is as good a time as any to talk about quantum approaches to the problem.  There are two main approaches to attacking the graph isomorphism problem on quantum computers.  The first is related to the hidden subgroup problem and the second is related to state preparation and the swap test.  A third “spin-off” inspired by quantum physics also exists, which I will also briefly mention.
Recall that the hidden subgroup problem is as follows:

Hidden Subgroup Problem (HSP): You are given query access to a function $latex f$ from a group $latex G$ to a set $latex S$ such that $latex f$ is constant on, and distinct across, the left cosets of an unknown subgroup $latex H$.  Find $latex H$ by querying $latex f$.
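A toy classical illustration of this setup (my own example, chosen for concreteness): take $latex G$ to be the cyclic group of order 12 with hidden subgroup $latex H=\{0,4,8\}$, and a hiding function that labels each coset. Classically we just query everything; the quantum point, of course, is to recover $latex H$ with far fewer resources.

```python
# G = Z_12, hidden subgroup H = {0, 4, 8}; f is constant on each coset
# g + H and distinct across cosets (a coset is labeled by its smallest
# element).  Brute-force recovery of H from queries to f alone.
G = list(range(12))
H = {0, 4, 8}

def f(g):
    return min((g + h) % 12 for h in H)

# g lies in H exactly when its coset is H itself, i.e. f(g) == f(0).
recovered = {g for g in G if f(g) == f(0)}
assert recovered == H
```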

Quantum computers can efficiently (efficiently here means in a time polynomial in the logarithm of the size of the group) solve the hidden subgroup problem when the group $latex G$ is a finite Abelian group.  Indeed Shor’s algorithms for factoring and discrete logarithm can be seen as solving such Abelian HSPs.  A nice introduction to the Abelian version of this problem, though it is now a little out of date, is

A more up to date introduction to the problem is provided by

  • “Quantum algorithms for algebraic problems” by Andrew Childs and Wim van Dam, Reviews of Modern Physics 82, 1-52 2010 arXiv:0812.0380

What is the connection to the graph isomorphism problem?  Well, the basic result is that if one could efficiently solve the HSP over the symmetric group (or the wreath product group $latex S_n \wr S_2$) then one would be able to efficiently solve graph isomorphism.  A place to find this is in

  • “A Quantum Observable for the Graph Isomorphism Problem,” by Mark Ettinger and Peter Hoyer (1999) arXiv:quant-ph/9901029

This paper establishes that there is a measurement one can perform on a polynomial number of qubits that solves the graph isomorphism problem.  Unfortunately it is not known how to efficiently implement this measurement (i.e. by a circuit of polynomial depth).  Even more unfortunately, there is a negative result that says that you really have to do something non-trivial across the entire system when you implement this measurement.  This is the culminating work reported in

  • “Limitations of quantum coset states for graph isomorphism” by Sean Hallgren, Cristopher Moore, Martin Rotteler, Alexander Russell, and Pranab Sen, STOC 2006 (paper here)

At this point it seems that new ideas are needed to make progress along this line of attack.  I have tried my own pseudo-new ideas, but alas they have come up wanting.
At this point it is useful to mention that there is a variation on the hidden subgroup problem, the hidden shift problem, which is arguably more naturally connected to the graph isomorphism problem.  You can learn about this here

  • “On the quantum hardness of solving isomorphism problems as nonabelian hidden shift problems,” by Andrew Childs and Pawel Wocjan, Quantum Information and Computation, 7 (5-6) 504-521, 2007 arXiv:quant-ph/0510185

I could go on and on about the hidden subgroup problem, but I think I’ll spare you.
Beyond the hidden subgroup problem, what other approaches are there to finding efficient quantum algorithms for the graph isomorphism problem?  A lesser studied, but very interesting approach relates graph isomorphism to state preparation.  Let $latex A_1$ and $latex A_2$ denote the adjacency matrices of the two graphs to be tested.  Now imagine that we can create the states

$latex |P_i\rangle = \sqrt{\frac{|{\rm Aut}(G_i)|}{n!}} \sum_{P} |PA_iP^{-1}\rangle$

where $latex {\rm Aut}(G_i)$ is the automorphism group of graph $latex G_i$, and the sum is over all $latex n \times n$ permutation matrices.  Now notice that if $latex G_1$ and $latex G_2$ are isomorphic, then these two states are identical.  If, on the other hand, $latex G_1$ and $latex G_2$ are not isomorphic, then these states are orthogonal, $latex \langle P_1|P_2\rangle=0$, since the superpositions above cannot contain a common term or this would yield an isomorphism.  Using this fact one can use the swap test to solve graph isomorphism…given the ability to prepare $latex |P_1\rangle$ and $latex |P_2\rangle$.  (See here for a discussion of the swap test.)  Unfortunately, no one knows how to efficiently prepare $latex |P_1\rangle$ and $latex |P_2\rangle$!  A good discussion, and one attack on this problem, is given in the paper

  • “Adiabatic quantum state generation and statistical zero knowledge” by Dorit Aharonov and Amnon Ta-Shma, STOC 2003 arXiv:quant-ph/0301023
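To make the role of the swap test explicit (a standard computation, not specific to the papers above): applied to the pair $latex |P_1\rangle \otimes |P_2\rangle$, the swap test reports “same” with probability

```latex
\Pr[\text{same}] = \frac{1}{2}\left(1 + |\langle P_1 | P_2 \rangle|^2\right),
```

so isomorphic graphs ($latex |\langle P_1|P_2\rangle|=1$) always pass, while non-isomorphic graphs ($latex \langle P_1|P_2\rangle=0$) pass with probability only $latex 1/2$ per trial; repeating the test $latex k$ times shrinks the error to $latex 2^{-k}$.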

Finally, let me mention a “quantum inspired” approach to graph isomorphism which has recently led to a sort of dead end, but that I find interesting.  This approach is described in

The basic idea is as follows.  As I described above, using the spectrum (the list of eigenvalues) of an adjacency matrix is not enough to distinguish non-isomorphic graphs.  Now the adjacency matrix is closely related to random walks on the graph being considered.  Indeed, from the adjacency matrix and the diagonal degree matrix one can construct what is called the Laplacian of a graph, and this is the infinitesimal generator of a random walk on the graph (I’m being sketchy here).  So thinking about this, one can ask: well, what graph describes two random walkers?  Well, if these walkers take turns moving, then one can see that two random walkers on a graph walk on the graph described by

$latex A \otimes I  + I \otimes A$

where $latex A$ is the original one-walker adjacency matrix.  But of course the spectrum of this new walk doesn’t help you in making a better invariant for graphs, as the eigenvalues of this new walk are just the various sums of pairs of the prior eigenvalues.  But, aha!, you say, what if we think like a quantum physicist and make these walkers either fermions or bosons.  This corresponds to either anti-symmetrizing or symmetrizing the above walk:

$latex S_{\pm} (A \otimes I  + I \otimes A) S_{\pm}$

where $latex S_{\pm}=\frac{1}{2}(I \pm {\rm SWAP})$ and $latex {\rm SWAP}$ swaps the two subsystems.  Well, it is easy to check that this doesn’t help either: if you look at all of the eigenvalues of the fermion and boson walks you really just have the original eigenvalue information and no more.  What Terry Rudolph did, however, was to think more like a physicist.  He said: well, what if we consider the walkers to be bosons, but now I make them what physicists call hard-core bosons (I wouldn’t recommend googling that): bosons that can’t sit on top of each other.  This means that in addition to symmetrizing the $latex A \otimes I  + I \otimes A$ matrix, you also remove the part of the matrix where the two walkers sit on the same vertex.  Terry shows that when he does this for certain non-isomorphic strongly regular graphs that are isospectral, if he considers three walkers, the eigenvalues are no longer the same.  Very cool.  Some follow-up papers examined this in more detail

  • “Symmetric Squares of Graphs” by Koenraad Audenaert, Chris Godsil, Gordon Royle, Terry Rudolph Journal of Combinatorial Theory, Series B, 97, 74-90 2007 arXiv:math/0507251
  • “Physically-motivated dynamical algorithms for the graph isomorphism problem” by Shiue-yuan Shiau, Robert Joynt, and S.N. Coppersmith, Quantum Information and Computation, 5 (6) 492-506 2005 arXiv:quant-ph/0312170

(Also note another line of somewhat similar investigation: arXiv:quant-ph/0505026.)  So does this method lead anywhere?  Unfortunately the answer appears to be negative.  Here is a paper

  • “Spectra of symmetric powers of graphs and the Weisfeiler-Lehman refinements” by Alfredo Alzaga, Rodrigo Iglesias, and Ricardo Pignol arXiv:0801.2322

that demonstrates that the $latex k$ hard-core boson walker matrix provides no more information than the $latex 2k$-dimensional WL algorithm (which Cai, Fürer, and Immerman showed cannot work.  Strangely it appears that Greg Kuperberg, like Babe Ruth before him, called this one.  Nice!)  Recently a different approach has also emerged to showing that this doesn’t work

  • “Non-Isomorphic Graphs with Cospectral Symmetric Powers” by Amir Rahnamai Barghi and Ilya Ponomarenko, The Electronic Journal of Combinatorics, 16(1) R120 2009 (paper here)

using the nice theory of association schemes.  So a good physicist-inspired idea, but so far, no breakthrough.
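The claim above that the two-walker spectrum carries no new information is easy to check numerically (my own sketch, assuming numpy): the eigenvalues of the Kronecker sum $latex A \otimes I + I \otimes A$ are exactly the pairwise sums of the one-walker eigenvalues.

```python
# Spectrum of the distinguishable two-walker matrix A (x) I + I (x) A is
# the multiset of pairwise sums of the one-walker eigenvalues, so it adds
# nothing for distinguishing non-isomorphic graphs.
import numpy as np

# Adjacency matrix of the 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = A.shape[0]
I = np.eye(n)

two_walkers = np.kron(A, I) + np.kron(I, A)

single = np.linalg.eigvalsh(A)
pairwise_sums = np.sort([a + b for a in single for b in single])

assert np.allclose(np.sort(np.linalg.eigvalsh(two_walkers)), pairwise_sums)
```

The hard-core boson trick is precisely a way to break this relationship, by deleting the diagonal "two walkers on one vertex" subspace before symmetrizing.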
Well, I’ve gone on for a long while now.  Perhaps one final reading to perform is related to graph isomorphism complete problems.  Since graph isomorphism is not known to be in P, nor is it known to be NP-complete, it is “natural” to define the complexity class related to graph isomorphism, GI, which is made up of problems with a polynomial time reduction to the graph isomorphism problem.  Then, similar to NP-complete problems, there are GI-complete problems.  Wikipedia has a nice list of such problems, but it doesn’t contain my favorite.  I like this one because it doesn’t seem similar to graph isomorphism upon first reading:

  • “The Complexity of Boolean Matrix Root Computation” by Martin Kutz, Theoretical Computer Science, 325(3) 373-390, 2004 (paper here)

Sadly, Martin Kutz passed away in 2007 at what appears to be quite a young age, considering his condensed publication record.  Read his paper for a nice square-root-of-matrices problem that is graph isomorphism complete.
So see how much fun studying graph isomorphism can be?  Okay, for some definition of fun!  Please feel free to post other papers in the comment section.

Missed This: New John Baez Blog

Hmm, I’m totally out of it, as I missed that John Baez, who “blogged” before blogging was the incredibly hip thing to do (which lasted for exactly 2 seconds in 2006?), has a new blog, a new two-year visiting position in Singapore, and a new focus.  From his first post:

I hope we talk about many things here: from math to physics to earth science, biology, computer science, economics, and the technologies of today and tomorrow – but in general, centered around the theme of what scientists can do to help save the planet.

Quick, to the RSS feeder!

Cambridge Postdoc

Quantum postdoc at Cambridge:

Post-doctoral Research Associates in Quantum Computing, Quantum Information Theory & Foundations
Salary: £27,319-£35,646
Limit of tenure: 2 years
Closing date: 31 August 2010
The Department invites applications for two post-doctoral research positions to commence on 1st October 2010 or later by agreement. The successful candidates will be associated with the Centre for Quantum Information and Foundations (formerly Centre for Quantum Computation) of the University of Cambridge (see http://cam.qubit.org).
Applications are especially welcomed from highly motivated researchers with a PhD in mathematics, theoretical physics or theoretical computer science, and a strong background in any of the following areas: quantum computation, quantum complexity theory,
quantum cryptography, quantum information theory, quantum foundations (especially in its relation to any of the afore-mentioned areas). For any further queries on the scientific details and requirements of the post, please contact Professor Richard Jozsa (R.Jozsa [turnthisinto@] damtp.cam.ac.uk), or Dr Adrian Kent, (A.P.A.Kent[turnthisinto@]damtp.cam.ac.uk).
Applications should include a full CV, publications list, a summary of research interests (up to one page), and a completed form CHRIS6 Parts I and III (downloadable from http://www.admin.cam.ac.uk/offices/personnel/forms/chris6/).
Applications quoting reference LE06886 should be emailed or posted (to arrive by 31 August 2010) to: Ms Miranda Canty, DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB (Email: mlc59 [turnthisinto@]hermes.cam.ac.uk).
Applicants should also arrange for three professional references to reach Ms Canty by 31 August 2010.
Quote reference: LE06886
The University values diversity and is committed to equality of opportunity. The University has a responsibility to ensure that all employees are eligible to live and work in the UK.

Book: The Myths of Innovation

Last week I picked up a copy of The Myths of Innovation by Scott Berkun. It’s a short little book, clocking in at 256 pages, paperback. The subject is, well, read the damn title of the book, silly! Berkun picks apart the many different myths that exist around innovation: epiphany, lone inventors, and many of the stories we tell ourselves after the fact about the messy process of innovation. It’s probably fair to say none of the insights provided by Berkun is all that shocking, but in a nice collected form you really get the point that we tell ourselves a lot of funny stories about innovation. My first thought upon reading the book was “oh, this book is for curmudgeons!” But upon reflection, perhaps it is exactly the opposite. Curmudgeons will already know many of the myths and be curmudgeonly about them: it is the non-curmudgeonly among you who need to read the book 🙂
But one point that Berkun makes is something I heartily concur with: that laughter can be a sign that innovation is occurring (dear commenter who is about to comment on the causal structure of this claim, please reread this sentence.) As a grad student in Berkeley I participated in a 24 hour puzzle scavenger hunt around nearly all of the SF Bay Area. At each new location a puzzle/brainteaser would be given whose solution indicated the next location in the puzzle hunt. At many of these locations we would start working on the puzzle and someone would suggest something really crazy about the puzzle, “hmmm, I bet this has something to do with semaphore,” because, well, the chess board colors are semaphore colors. And we would all laugh. Then someone would think to actually check the idea that we all laughed about. And inevitably it would be the key to solving the damn puzzle. After a few stops, we noticed this, and so anytime someone said something we would laugh at, we’d have to immediately follow up on the idea 🙂 But this makes complete sense: insight or innovation occurs when we are, by definition, pushing the limits of what is acceptable. And laughter is often our best “defense” in these situations. Further, laughter has a strong improv component: the structure of what is funny requires you to accept the craziness behind the joke and run with it. Who knows where a joke may take you (as opposed to this paragraph, which is going nowhere, and is about to end.)
Finally I wish every reviewer of papers and grants would read this book and especially the reviewers who said one of my grant applications was just too speculative for the committee’s taste 😉
And a note to myself when I get a bad review about something I really think is the bees knees: reread this book.

Does the arXiv Forbid Posting Referee Reports?

ArXiv:1007.3202 is a paper whose conclusions I do not agree with (well actually I do think the original EPR paper is “wrong”, but not for the reasons the author gives!)  The abstract of the paper is as follows:

EPR paper [1] contains an error. Its correction leads to a conclusion that position and momentum of a particle can be defined precisely simultaneously, EPR paradox does not exist and uncertainty relations have nothing to do with quantum mechanics. Logic of the EPR paper shows that entangled states of separated particles do not exist and therefore there are no nonlocality in quantum mechanics. Bell’s inequalities are never violated, and results of experiments, proving their violation, are shown to be false. Experiments to prove absence of nonlocality are proposed where Bell’s inequalities are replaced by precise prediction. Interpretation of quantum mechanics in terms of classical field theory is suggested. Censorship against this paper is demonstrated.

Okay, fine, the paper makes some pretty astounding claims (at one point I believe the author simply rediscovers the detector efficiency loophole in Bell inequality experiments), but that’s not what really interests me.  What really interests me is the author’s claim of censorship.  In particular the paper reports on the author’s attempt to submit this paper to a workshop, QUANTUM 2010, whose proceedings would appear in the “International Journal of Quantum Information,” and the rejection he received.  Okay, fine, standard story here.  But then the author gives a synopsis of the referee reports, followed by, I think, a more interesting claim:

I am sorry that I did not put here the full referee reports. The ArXiv admin forbidden to do that. I was told that anonymous referee reports are the subject of the copy right law. It is really terrible, if it is true. The referee report is a court verdict against my paper. Imagine that a court verdict is a subject of the copyright law. Then you would never be able to appeal against it. I think that the only punishment to dishonest and irresponsible referees is publication of their repots. It is so evident! But we see that dishonesty and incompetence are protected. I do not agree with such a policy, however I have nothing to do but to take dictation of ArXiv admin.

Is it really true that the arXiv forbids publishing referee reports?  Do referees really retain copyright on the referee reports?  And if so, should it be this way or should referees have to give up copyright on their reports?  Inquiring minds want to know!

Things I've Done That Make My Life Better

These are all very selfish, so sue me.  Readers who don’t like navel gazing should stop reading now and make sure not to gaze at their own navels.  Things I’ve done recently that have improved my life (brought to you by the word “shrill”):

  • Reading and listening to less politics, and when reading politics not responding to what appears to me to be an amazing lack of logic and/or grasp of the complexity of a situation.  I mean I enjoy political views that I don’t agree with, but only if they aren’t built of a tower of logical fallacies.   I’ve decided not to respond to or comment on a multitude of blogs, on all sides of the political spectrum, especially those that begin with the assumption that they are absolutely completely correct, and have the answers if only you’d just listen to their single solution.  It’s not that I don’t care about politics, it’s just that I don’t think what you have to say adds that much.  Sorry.
  • Learned a lot about graph isomorphism.  Did you know that one of the seminal contributors to understanding graph isomorphism algorithms, Boris Weisfeiler, went missing in Chile in 1985 under nefarious circumstances?
  • Tried to perform work that will be rejected.  This one has succeeded(!), though I can’t say I feel absolutely good about it.  But still it has made my life a bit better because I don’t feel guilty about not doing something a bit different.  I really really really like my recent papers, but referees differ in that opinion.  These referees have also taught me a lot about how I shouldn’t review papers, so I thank them very much!  I’ve also realized that quantum computing is now old enough to have curmudgeon reviewers from within the field.  Congrats, you’ve just made the field a lot less happy, Mr. Curmudgeon Reviewer; I hope you have a good time in your old age sitting in your air-conditioned tenured position, counting your citations!
  • Read Born to Run: A Hidden Tribe, Superathletes, and the Greatest Race the World Has Never Seen, which, while it gets a little iffy in its breathless report of the science of barefoot running at the end and shows its magazine article roots, is a fun read about extreme running.
  • Not responded to blog comments in which the commenter remarks about how “X” has made the commenter so angry that they will “no longer be reading this blog.”  Okay, Mr. Angry Commenter, thank you for sharing, but is the world a better place with your comment?  Probably not.
  • Said “no” to reviews.  Yes, I am a bad citizen.  I’ve only reviewed something like 5 times the number of papers I’ve submitted.  I’ve decided I will only review papers for which I feel I would be an excellent reviewer, not just a good one.  And yes, I am much less inclined to review a paper if it comes from a publisher who seems to be part of a journal system that is highly dysfunctional.
  • Taught myself Ruby.  Not hard to do, and lots of fun.  Next up is understanding Rails.  Plus I got to read Why’s (Poignant) Guide to Ruby, which I highly recommend.
  • Disconnected from news that isn’t really news.  It’s addictive listening to what passes for news these days, but I’d rather spend that time learning something outside the redundant reporting that is nearly entirely predictable from past news articles.  Podcasts provide a vastly more interesting audio distraction.
  • Expanded my ability to cook into new and exciting regimes.  I’ve now got a killer spinach salad, an out-of-this-world pork tenderloin with caramelized pears recipe, as well as some pretty good pizza mastered.  Next up: must learn how to produce outrageously good and spicy BBQ!
  • Spent lots of time with Baby Bacon, trying to pin down the points about life that are important, thinking hard about what I want to do in the future, and what positive moves I can act on going forward.  Yeah, I’m a touchy-feely hippie guy.  So sue me 🙂

In Search of Plugins

Speaking of blogging technology: plugins that I haven’t seen for WordPress but that would be fun (i.e. things I’d love to do if I were king of infinite time.)

  • A plugin to allow for comments INSIDE of a blog post.  The reader would be able to click at, say, the end of a paragraph, and have their comment inserted there.  The comment would display collapsed (as a small icon or such) but when clicked on would expand for an inline comment.  Comment replies could be posted there as well.  An option to display all comments (inline and after the post) should be available as well.  Of course one could apply this recursively 🙂
  • Automagic arXiv linker.  I’m amazed this doesn’t exist.  I should be able to cut and paste an arXiv reference and have it automatically linked (including maybe the option to add [pdf] after the paper for a direct link to the pdf.)
  • A plugin to list the tweets that have been made about a post.  I have tweetmeme installed, which gives a link to the tweets, but if there is a plugin that lists tweets inline, I haven’t found it.
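A real arXiv linker would be a PHP plugin filtering WordPress post content, but the core matching logic is simple enough to sketch.  Here’s a rough cut in Ruby (the language I just taught myself); the regex and the link format are my own guesses at what “automagic” should mean, not anything lifted from an existing plugin:

```ruby
# A sketch of an "automagic" arXiv linker, assuming the post text arrives
# as a plain string.  Matches both new-style IDs (1007.2310) and old-style
# IDs (quant-ph/0610203), with or without an "arXiv:" prefix.
ARXIV_ID = %r{
  \b(?:arXiv:)?
  (
    \d{4}\.\d{4,5}(?:v\d+)?          # new-style: YYMM.NNNNN, optional version
    |
    [a-z-]+(?:\.[A-Z]{2})?/\d{7}     # old-style: archive/YYMMNNN
  )\b
}x

# Replace each arXiv ID with an abstract link, plus a [pdf] direct link.
def link_arxiv(text)
  text.gsub(ARXIV_ID) do
    id = Regexp.last_match(1)
    %(<a href="https://arxiv.org/abs/#{id}">arXiv:#{id}</a>) +
      %( [<a href="https://arxiv.org/pdf/#{id}">pdf</a>])
  end
end

puts link_arxiv("See quant-ph/0610203 and arXiv:1007.2310 for details.")
```

The same regex, dropped into a `the_content` filter on the PHP side, would give the WordPress version; the hard part is really just not mangling IDs that already sit inside links.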

Reimagining Science Networks

Scienceblogs, the science network that was my old (where “old” = “a few days ago”) haunt, is in revolt.  Okay, well maybe the network is not in revolt, but there is at least a minor insurgency.  Yesterday, the amazing force of blogging known as @Boraz left the network (be sure you have more than a few minutes if you are going to read Bora’s goodbye letter.)  Today, the biggest fish of them all, PZ Myers, has gone on strike (along with other Sciencebloggers.)  Numerous other bloggers have also jumped ship (a list is being kept by Carl Zimmer here.)  This is sad, as I personally think the Scienceblogs network contributes significantly to spreading the joys and tribulations of science, but also a bit exciting, for, as Dave Munger points out, it represents the prospect of new networks arising and hopefully pushing the entity known as the science-blogosphere forward.
I myself am not much of a blogger.  What I write here is for my own personal amusement (so if you don’t like it, well I don’t give a damn, thankyouverymuch) and, frankly, to distract my fellow quantum computing researchers from getting any work done (ha!)  I do enjoy writing (literature major, you know) and also enjoy trying to write coherently about science, and sometimes, as a consequence, I get read by people who aren’t here just to hear about the latest and greatest in quantum channel capacities.  That’s great, but I don’t really consider “science blogger” my defining characteristic (my self image, such as it is, is more along the lines of a hack who has somehow managed to remain in science, despite being almost a decade out of graduate school without a tenure track position, due in large part to being stubborn as hell.  But that’s another story.)
But, even though I don’t consider myself very bloggerrific, having had a seat at the Scienceblogs table gave me an up front look at, to use a silly term, new media, and in particular at the notion of a science network.  So to me, following Munger’s post, the interesting question is not what will become of Scienceblogs in its current form, but how will the entities we call science networks evolve going forward.  Since there are a large number of Sciencebloggers jumping ship, it seems that now would be a good time for a new media science mogul to jump into the fray and scoop up some genuinely awesome bloggers.  So the question is, what should a science network look like?
To begin, I can start with Pieter’s comment a few days ago:

…I never fully understood the need for successful bloggers to join an umbrella organization. Did you get more readers when you moved to Pepsiblogs (good one!)?

That is exactly what I thought when I was asked, clearly by some clerical error, to join Scienceblogs!  Having joined, I can say that yes, it did increase my blog traffic.  But I think a science network also adds something else.
First of all, there is the fact that there is a front page which contains significant “edited” content.  It is edited in the sense that the powers that be have a large say in selecting what appears there in a highlighted mode.  This is great because even the best bloggers, I’m afraid, generate a fair number of posts which aren’t too exciting.  A discerning eye, however, can grab the good stuff, and I regularly go to the front page to see what exciting things are being blogged about.  I’m not sure that the front page of Scienceblogs is the best way of providing an edited version of a blog network, but I do think that it is heading in the right direction.  So in thinking about moving forward, I wonder how one could change this editing and give it more value.  For instance, is the fairly static setup of the front page the right way to go, or should there be a more dynamic front page?
Another important property of a science network is in building discussion, and by discussion I don’t just mean a bunch of people agreeing with each other.  For example, Scienceblogs has a “buzz” where articles on a featured topic are posted on the front page.  Sometimes this content presents a unified view of a topic, but mostly you get a terrific variety of opinions about a subject.  Now I won’t argue that this diversity of opinion is boundless: for instance you aren’t likely to find the Christian view on topological insulators, but you are likely to get the opinions of a large number of scientists or science journalists from a variety of areas.  This solves, for me, one of the worst problems with my blog reading: only following blogs where I’m predisposed to agree with the blogger.  Further, this content gives rise to a genuine discussion among the bloggers, in that they actually read what others have written as opposed to just sitting on an isolated island (okay, well I rarely read what even I’ve written, hence the horrible typos and grammatical gaffes that litter my writing.)
Third, a science network like Scienceblogs serves as a proxy for a certain amount of quality.  Despite my trying to bring this quality down, I would say that some of the best science bloggers around have, or have had, a blog at Scienceblogs, and this lets the network save you from having to read a bunch of someone’s blog posts just to figure out whether they have something interesting to say or are not worth your time.
So those are at least some of the reasons that a science network is good.  I must say, in thinking about these reasons, however, that I can’t completely convince myself that these amount to enough to justify the science networking idea.  Many high quality bloggers get along just fine without such a network.
Which brings me to the real subject of this post: how would I redesign Scienceblogs?
Well the first thing that comes to mind is better tech support.  Okay, just kidding.  Kind of.
Actually I do think there is a valid point in this dig at tech support.  One of the hardest things for me while I was at Scienceblogs was not being able to dig around and modify my blog in the sort of way I can on my own hosted server.  Why is this important?  Well, for example, Scienceblogs does not currently have a mobile version of the website.  (Mea culpa: at one point, back when I was writing iPhone apps, I emailed the powers that be at Scienceblogs asking if they wanted me to design an iPhone app for them.  I got crickets back in response.  Later this came up in discussion among Sciencebloggers and the powers that be emailed me asking for more details.  This was in the middle of the impending arrival of baby Pontiff, so I never followed back up on it.  I feel bad for not doing this, but it seems that if the management was really serious they could have pursued numerous other, um, really qualified people.  Note that it took me about 30 minutes to get a mobile-ready version of my blog set up when I moved back here, and yes, this is different than an iPhone app.)  But more importantly, technology has that important property that it is constantly changing.  Anyone who wants to build a network of science blogs should probably seriously consider that the infrastructure they are building will be out of date every few years or so and will need major upgrades at a fairly high rate.
For instance, Scienceblogs should have been among the first to offer an iPhone app, an Android app, an iPad app.  Scienceblogs should think of ways to incorporate its tweeting members: as it is, as far as I know, Scienceblogs doesn’t even keep a list of its members who tweet.  Scienceblogs, a network about science, doesn’t even have LaTeX support, for heaven’s sake, let alone, as far as I can tell, plans for how things like HTML5 will change what one can do on a website.  What will happen to Scienceblogs when technology adapts?  Will it adapt too?
So I think, if I were going to start a new science network I would start with an incredibly dedicated hacker.  A quickly adaptable platform is a prereq and if you don’t start with a good base, well then you are just going to be out of date pretty quickly.
But of course there is more to a platform than just the tech behind the scenes.  There is also the content.  I have a lot of admiration for the people who have been the behind-the-scenes editors at Scienceblogs and I think this is the part of the network that worked the best.  I do wonder, however, if they have enough editorial control: that is, it would seem to me that they should have an even more expansive role in the network.  And it’s not clear to me that there should be as large a separation between their magazine Seed and the Sciencebloggers.  I would wager that many people don’t even know that Scienceblogs is related to Seed or that Seed exists at all.  And here is where I think one needs to get a little radical.  Seed should (as roughly suggested by Bora), I think, give up its print magazine and fold itself into Scienceblogs.  High quality traditional media pieces like those Seed produces are great.  So why can’t they be part of the network in an integrated way?
Well these are just my silly initial thoughts about re-imagining science networks, when I should be busy changing diapers.  And certainly I don’t know what I’m talking about.  But read the disclaimer in the upper right of this blog.  So don’t say I didn’t warn you!

Australia Research Council Centers

Looking way over that big pond called the Pacific (50 percent bigger than the Atlantic, take that, stodgy East Coasters) I see that the Australian Research Council has funded two (count ’em, two!) big quantum centers (the OED tells me that the spelling is “center” and not “centre”—if you want to be historically correct for this definition of the word.  I mean, come on, upside down people: why follow Chaucer, when you could follow Humphrey Prideaux?  Break the bonds of the Queen!)  The funded centers are the ARC Centre of Excellence for Engineered Quantum Systems and the ARC Centre of Excellence for Quantum Computation and Communication Technology.  Both were funded at $24.5 million Australian for five years.  Congrats to the teams involved!

Mobile Pontiff

I’ve just installed the WordPress Mobile Pack plugin. This should give users on mobile devices a faster version of this blog, adapted to smaller screens. I’ll bet at least one of the two readers of this blog has a smartphone, so please let me know if you have any problems with the mobile theme.