Radio Free Albemuth Trailer from Elizabeth Karr on Vimeo.
(I wouldn’t say RFA is Philip K. Dick’s most controversial novel. VALIS would get my vote for that title.)

O brave new quantum world!
A TEDxWaterloo talk. See it here and start changing the way you do science:
Recently computer scientist Leslie Valiant won the ACM’s Turing Award, considered one of the most prestigious prizes in computer science. Valiant is famous for many results, not the least of which are his results on the permanent of a matrix. Over at Gödel’s Lost Letter, the iced tea man has a nice collection of interesting permanent-related complexity facts. Recall that the permanent of an n by n matrix [latex]A[/latex] is given by
[latex]{\rm per}\, A = \sum_{\pi \in S_n} \prod_{i=1}^n A_{i,\pi(i)}[/latex]
where [latex]S_n[/latex] is the symmetric group on n elements, and similarly the determinant of an n by n matrix [latex]A[/latex] is given by
[latex]{\rm det}\, A = \sum_{\pi \in S_n} (-1)^{{\rm sgn}\, \pi} \prod_{i=1}^n A_{i,\pi(i)}[/latex]
where [latex]{\rm sgn}\, \pi[/latex] is 0 if the permutation is made up of an even number of transpositions and 1 if the permutation is made up of an odd number of transpositions. One day I was sitting in my office when a physics undergraduate came by (a former sailor from Alaska) and said…”Hey Dave, what if we replace the function in front of each term in the permanent and determinant by a character of a symmetric group irrep?” Which of course knocked me off my chair, first because what undergrad physics major knows about symmetric group irreps, and second because I had never thought about this interesting twist on the permanent and determinant.
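(For concreteness, the permanent and determinant defined above are just these two brute-force sums over all n! permutations. Here is a throwaway Python sketch of my own, purely illustrative and nothing from Valiant or the Gödel’s Lost Letter post, that evaluates both directly and checks the determinant against numpy:)
from itertools import permutations
from math import prod

import numpy as np

def sign(p):
    # (-1)^(sgn pi): +1 for an even permutation, -1 for an odd one,
    # computed by counting inversions.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def permanent(A):
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def determinant(A):
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

A = np.random.rand(4, 4)
assert np.isclose(determinant(A), np.linalg.det(A))  # sanity check against numpy
print(permanent(A), determinant(A))
Both of these take n! time, of course, which is exactly the naive cost the complexity results below are about.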
A little Google-fu later, we quickly found the answer. For an n by n matrix [latex]A[/latex], the immanant is given by
[latex]{\rm imm}_\lambda\, A = \sum_{\pi \in S_n} \chi_{\lambda}(\pi) \prod_{i=1}^n A_{i,\pi(i)}[/latex]
where [latex]\lambda[/latex] labels the irrep of [latex]S_n[/latex] and [latex]\chi_\lambda(\pi)[/latex] is the character of the irrep [latex]\lambda[/latex] at group element [latex]\pi[/latex]. Recall that the irreps of the symmetric group [latex]S_n[/latex] are labeled by partitions of [latex]n[/latex]. A partition of [latex]n[/latex] is a non-increasing sequence of positive integers that sums to n, [latex](\lambda_1, \lambda_2, \dots, \lambda_r)[/latex] with [latex]\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_r[/latex] such that [latex]\sum_{i=1}^r \lambda_i = n[/latex]. The partition [latex](n)[/latex] corresponds to the trivial irrep, in which [latex]\chi_{(n)}(\pi)=1[/latex], and on the opposite end of the spectrum the partition [latex](1,1,\dots,1)[/latex] corresponds to the alternating irrep, where [latex]\chi_{(1,1,\dots,1)}(\pi)=(-1)^{{\rm sgn}\, \pi}[/latex]. Thus we see that the permanent and determinant sit at the two ends of a spectrum of polynomials known as the immanants.
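Here is a small sketch of what that spectrum looks like in code (again my own toy illustration, with made-up names like immanant and chi_21): the character is passed in as a function of the permutation, so the trivial character gives back the permanent, the alternating character gives back the determinant, and for n=3 the character of the two-dimensional irrep (2,1), which takes value 2 on the identity, 0 on transpositions, and -1 on 3-cycles, gives the one immanant sitting in between:
from itertools import permutations
from math import prod

def immanant(A, chi):
    n = len(A)
    return sum(chi(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def parity(p):
    # +1 for even permutations, -1 for odd ones
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def num_cycles(p):
    # number of cycles of the permutation, counting fixed points
    seen, cycles = set(), 0
    for start in range(len(p)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = p[j]
    return cycles

def chi_21(p):
    # character of the (2,1) irrep of S_3, looked up by cycle count:
    # 3 cycles = identity -> 2, 2 cycles = transposition -> 0, 1 cycle = 3-cycle -> -1
    return {3: 2, 2: 0, 1: -1}[num_cycles(p)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(immanant(A, lambda p: 1))  # trivial irrep (3): the permanent
print(immanant(A, parity))       # alternating irrep (1,1,1): the determinant
print(immanant(A, chi_21))       # the immanant for lambda = (2,1)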
One of Valiant’s best known results is that evaluating the permanent of a matrix with 0 and 1 as its possible matrix entries is #P-complete, a.k.a. really hard. On the other hand, evaluating the determinant is not computationally difficult at all. At first this seems odd, because the determinant has those minus signs, which you would think would make it harder and not easier to evaluate, but alas this is not so. So what about the computational complexity of the immanants? The Computational Complexity of Immanants by Peter Bürgisser (sorry, I could only find a paywalled version) shows that there is a sense in which certain immanants are also #P-complete to evaluate. Roughly, the idea is that if one defines a class of immanants whose partitions have a “step” in them that grows polynomially in [latex]n[/latex] (the step for the permanent is [latex]n[/latex]), then these will be #P-complete to evaluate.
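For the permanent the best known general algorithms are still exponential; the classic trick is Ryser’s formula, which beats the naive n! sum but still takes roughly 2^n steps. A quick sketch of it (the formula is standard, the throwaway code is mine), just to contrast with the determinant, which Gaussian elimination knocks out in O(n^3):
from itertools import combinations

def permanent_ryser(A):
    # Ryser's formula: per(A) = (-1)^n * sum over column subsets S of
    # (-1)^|S| * prod_i sum_{j in S} A[i][j].
    # Exponential, but much better than summing over all n! permutations.
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            term = 1
            for i in range(n):
                term *= sum(A[i][j] for j in cols)
            total += (-1) ** r * term
    return (-1) ** n * total

print(permanent_ryser([[1, 1], [1, 1]]))  # 2: both permutations contribute 1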
So what good are immanants? Well, I’m sure mathematicians have some fine uses for them. One interesting thing to note is that immanantal polynomials of adjacency matrices of graphs give you graph invariants (for the determinant this is the same as saying that the eigenvalues of the adjacency matrix are a graph invariant). However it is known that this, like the eigenvalues, is not a complete set of graph invariants and so is not a route towards efficiently solving graph isomorphism. So no real use there, but I’m sure an object so symmetric must be of some use, no?
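For what it’s worth, the invariance itself is easy to see: relabeling the vertices conjugates the adjacency matrix by a permutation matrix, and since characters are class functions every immanant is unchanged. A tiny numerical check of this for the permanent, on a toy example of my own (the 4-cycle graph):
from itertools import permutations
from math import prod

import numpy as np

def permanent(A):
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# adjacency matrix of the 4-cycle graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

sigma = np.random.permutation(4)
P = np.eye(4, dtype=int)[sigma]   # permutation matrix for the vertex relabeling
B = P @ A @ P.T                   # same graph, vertices shuffled

print(permanent(A), permanent(B))  # always equal, whatever sigma was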
If you know what Katamari Damacy is, then you will love http://kathack.com.
(The script was created by University of Washington students Alex Leone, David Nufer, and David Truong for the 2011 Yahoo HackU contest. See, dear physicists, the benefits of living in a computer science department 🙂 )
One of my favorite software licenses is the Beerware license, here in a version due to Poul-Henning Kamp:
/*
* --------------------------------------------------------------
* "THE BEER-WARE LICENSE" (Revision 42):
* <phk@FreeBSD.ORG> wrote this file. As long as you retain
* this notice you can do whatever you want with this stuff.
* If we meet some day, and you think this stuff is worth it,
* you can buy me a beer in return. Poul-Henning Kamp
* --------------------------------------------------------------
*/
Recently I came across a license of a form I’d never seen before, this one for one of the top graph isomorphism software programs, nauty:
Copyright (1984-2010) Brendan McKay. All rights reserved. Permission is hereby given for use and/or distribution with the exception of sale for profit or application with nontrivial military significance. You must not remove this copyright notice, and you must document any changes that you make to this program. This software is subject to this copyright only, irrespective of any copyright attached to any package of which this is a part.
Just as there are socially conscious mutual funds, it also appears that there are socially conscious software licenses! Who knew?
In this Saturday’s New York Times, in an article titled The Chasm Between Consumers and the Fed, I found the most amazing chart:

Of course I am not a financial journalist, so I have absolutely no understanding of the gigantic amoeba-like-shaded-area in this chart. But it looks very cool and very much like it represents something about which the article has much to say. Sadly, however, the New York Times does not provide the methodology it used in obtaining the amazing fact that six of the points can be grouped together while those other two points are excluded from the party. What astounding mathematical finance model did the Grey Lady use to come up with this plot (I’ll bet it involves Ito calculus)?
Frustrated by the lack of transparency, I decided that it would be best if I tried to come up with my own methods and models for obtaining this graph. My first attempt, after scouring the economics literature and using some advanced methods (related to integrating over Banach spaces), was the following

As you can see, this model seems to pick out the overall rate of return as the defining characteristic. After much reflection, and after reacquainting myself with some obscure results from the theory of hyperbolic partial differential equations and new deep learning techniques from machine learning, I was able to tweak my model a bit and obtained the following

Now this is a beautiful plot, but it clearly does not reproduce the graph from the New York Times. What exactly, was I missing in order to obtain the giant amoeba of correlation?
But then I remembered…I’m not a financial journalist. I’m a physicist. And so I took a look at the stats notes I took as a physics major at Caltech, quickly plugged in some numbers, and obtained a new, reality-based version of the plot

Well, it’s not the New York Times plot. But I like it a lot.