Financial Markets as Test of Fundamental Physics

From an article in the New York Times today:

More recently, executives have blamed very unusual events — known to experts as 25-standard deviation moves, things expected only every 100,000 years — for the disruptions that computers could not predict.

Um, according to my calculation, a 25 standard deviation move on a normal distribution has a probability of occurring of about [tex]$6 \times 10^{-138}$[/tex]. This means that if the above statement is correct (100,000 years equals one 25 standard deviation move), then financial transactions occur at a rate of one transaction every [tex]$10^{-117}$[/tex] seconds. This is, you know, only about [tex]$10^{-73}$[/tex] of the Planck time relevant to a quantum theory of gravity.
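(For the incredulous, here is a quick Python sanity check of that number, a sketch of my own; treating a “25 standard deviation move” as the two-sided tail of a standard normal is my assumption, but one-sided or two-sided you land around [tex]$10^{-138}$[/tex] either way.)

from math import erfc, sqrt

# Probability that a standard normal variable lands more than 25 sigma from
# its mean (two-sided tail), via the complementary error function.
sigma = 25
print(erfc(sigma / sqrt(2)))   # roughly 6e-138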
All of which is great if you’re a physicist! Forget about building the Large Hadron Collider, just use the financial markets to test your theory of quantum gravity! Maybe the recent credit crunch is evidence for the Higgs boson or a selectron? I mean, seriously, we already have huge numbers of physicists working in the financial sector. Maybe they were on to something we didn’t notice and they’re really doing fundamental physics using this incredible financial transaction speed (and even making money while their poor thesis advisors slave away in tenured-at-a-state-institute land 🙂 )
More seriously, I wonder if one could predict the future behavior of a financial instrument by examining the incidence of mathematical jargon in the instrument’s literature and the percentage of times the statements actually make sense (would you invest if you found a claim about 25 standard deviation moves in a hedge fund’s plan?)

Panama New Paper Dance

Paper dance. Delayed posting here about the paper dance because I did the paper dance in Bocas del Toro in Panama. “Panama! Panama ah ah ah! … Model citizen, zero discipline.” New paper, arXiv:0708.1221 (scirate here):

Title: Caching in matrix product algorithms
Authors: Gregory M. Crosswhite and Dave Bacon
Abstract: A new type of diagram is introduced for visualizing matrix product states which makes transparent a connection between matrix product states and complex weighted finite state automata. It is then shown how one can proceed in the opposite direction: writing an automaton that “generates” an operator gives one an immediate matrix factorization of it. Matrix product factorizations are shown to have the advantage of reducing the cost of computing expectation values by facilitating caching of intermediate calculations. Finally, these techniques are generalized to the case of multiple dimensions.
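To give a flavor of the caching trick without making you read the paper: below is a rough numpy sketch of my own (not code from the paper, and the tensor index conventions are just ones I picked) that computes a single-site expectation value at every site of a matrix product state while reusing cached left and right partial contractions, so the whole sweep takes a number of tensor contractions linear in the chain length rather than quadratic.

import numpy as np

def apply_from_left(E, A, op=None):
    """Absorb one site into a cached left environment E. A has shape
    (left_bond, physical, right_bond); op is a physical x physical matrix
    with op[j, i] = <j|op|i>, or None for the identity."""
    if op is None:
        T = np.einsum('aib,cid->acbd', A, A.conj())
    else:
        T = np.einsum('aib,ji,cjd->acbd', A, op, A.conj())
    return np.einsum('ac,acbd->bd', E, T)

def apply_from_right(E, A):
    """Absorb one site into a cached right environment E (identity operator)."""
    T = np.einsum('aib,cid->acbd', A, A.conj())
    return np.einsum('acbd,bd->ac', T, E)

def local_expectations(mps, op):
    """<psi|op_i|psi> / <psi|psi> for every site i of an open-boundary MPS,
    with the left and right partial contractions cached and reused."""
    n = len(mps)
    left = [np.ones((1, 1))]                      # left[i]: sites 0 .. i-1
    for A in mps[:-1]:
        left.append(apply_from_left(left[-1], A))
    right = [np.ones((1, 1)) for _ in range(n)]   # right[i]: sites i+1 .. n-1
    for i in range(n - 2, -1, -1):
        right[i] = apply_from_right(right[i + 1], mps[i + 1])
    norm = apply_from_left(left[-1], mps[-1]).item()
    return [(apply_from_left(left[i], A, op) * right[i]).sum() / norm
            for i, A in enumerate(mps)]

# Example: <sigma_z> at every site of a random 10-site qubit MPS.
rng = np.random.default_rng(0)
bond, n_sites = 4, 10
shapes = [(1 if i == 0 else bond, 2, 1 if i == n_sites - 1 else bond)
          for i in range(n_sites)]
mps = [rng.normal(size=s) for s in shapes]
sigma_z = np.diag([1.0, -1.0])
print(local_expectations(mps, sigma_z))

The left[] and right[] lists are the cache: each partial contraction is computed once and reused for every site, which is the basic flavor of the caching the abstract refers to (the paper does this with nicer diagrams and in more generality).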

Interesting Music Choice

Because we all love medical animations set to a techno beat:
[youtube]http://www.youtube.com/watch?v=AjhbZ8zrtKw&mode=related&search=[/youtube]

ARPA is the New Lowercase Letter "i"

It looks like the America COMPETES Act is making its way out of Congress (See here). One interesting part of the science and technology legislation being considered is the creation of ARPA-E, an Advanced Research Projects Agency for Energy (initial budget of $300M.) Similarly, a while back (former) Director of National Intelligence John Negroponte discussed an intelligence effort called IARPA. Both of these names are in reference to DARPA, the Defense Advanced Research Projects Agency, which used to be called just ARPA and was responsible for funding the ARPANET, which eventually became the Internet (a minor funding success, you might say 🙂 )
Hmm, it seems to me that ARPA is the new lower case letter “i” (“i” being the new lowercase “e”, of course.) Anyone seen any more ARPAs out in the wild? More importantly, when will I get my ARPAphone?

Research Grant Dollars

Over at Life as a Physicist Gordon Watts notes the email we received here at the University of Washington from our university president yesterday, which told us that last year, for the first time, UW received over a billion dollars in research grants. Only Johns Hopkins receives more money. Of course the main reason for this is the UW’s School of Medicine, which brings in over half a billion dollars in funding every year. Holy moly that’s a lot of research grant dollars.
This got me thinking about research funding and I thought an incredibly stupid thought. Which is of course what this blog is for: sharing my incredibly stupid thoughts…in public…so everyone can laugh at me. A big issue which comes up in physics graduate programs is the fact that the supply chain for academic jobs in physics is severely out of whack. The number of faculty positions versus the number of people who want these jobs is the source of an incredible amount of frustration and pain for the vast majority of graduate students and postdocs who will not obtain faculty positions. This is, of course, true across a multitude of fields, not just physics, but I’ll stick to physics as it is the field I know best.
Of course part of the problem is that the incentive system for faculty members is askew: you get rewarded for bringing in money which supports graduate students. So in some sense, the number of graduate students you mentor is a proxy measure of your success as a faculty member. Indeed there is very little incentive for a professor at a research institution to not add even more graduate students to the meat factory of the academic job market.
Now there are many things we can think about to fix this situation, though almost none of them will probably ever come to fruition, simply because there isn’t much incentive to do so from the “winners”, who are also the ones who would be responsible for fixing the system. From my own perspective I’m a big advocate of science departments owning up to the problem and providing a setting where, while research and the academic system remain the core of the graduate school experience, departments do a lot more to emphasize the general applicability of the degree their students are earning. Physics, in particular, suffers greatly from the attitude that only a faculty position at a top research school is acceptable, ignoring the huge amount of success that physicists have had departing from this path (and yes, I think about this path myself, nearly every day, especially when my research isn’t working the way I want it to. Of course this is probably the reason I’m in my current position.) Of course, I’m sure there are those who don’t think there is a problem at all. If you’re one of those people you might as well stop reading now, since there ain’t no way what I’m going to say next is going to do anything besides cause an increase in your blood pressure.
So back to the stupid idea. My position at the University of Washington is as a research assistant professor. What this means is that I am supported entirely by grant money I raise. Of course one particular side effect of this is that it is much harder for me to take on a lot of graduate students. So my stupid idea was: what would happen if this setup were much more widely in place? What if faculty were rewarded a lot more for paying their own salary than they currently are? What if the proportion of research faculty were much more in line with the proportion of funding coming in to a university? What if the proportion of research faculty to faculty rewarded more for teaching were more in line with the actual source of funding dollars? This would definitely change the ability to fund graduate students at the level they are currently funded.
But, of course, it is a stupid idea. Increasing the number of research professors would cause all sorts of havoc with teaching. And really, do I want more people to have to raise their salaries like I do and to suffer the slings and arrows of funding fortunes? Maybe what I really need is someone to commiserate with 🙂 But it is an interesting model to consider: what happens if a university acknowledges the central nature of research in its endeavors and tries to accommodate this with more research positions, more teaching-emphasis positions, and a reduction in the number of traditional tenure positions? I’m not sure I know the answer, but I’d be curious to know if there are any examples of schools which have pushed in this direction and what the (probably insane) consequences of such a move have been.

Mesoscopic Quantum Coherence Length in a 1D Spin Chain

Interesting experiment reported in Science, “Mesoscopic Phase Coherence in a Quantum Spin Fluid,” Xu et al. (available here). The authors discuss a one-dimensional spin chain where each site has a spin-1 system. This system is coupled antiferromagnetically to its nearest neighbors. Now such systems have a ground state whose two-spin correlation function, [tex]$\langle S_i S_j \rangle$[/tex], decays exponentially as a function of the distance between site i and site j. However, if you examine the more complicated correlation function [tex]$\langle S_i \exp[i \pi \sum_{i<k<j} S_k] S_j \rangle$[/tex], this tends to a constant as the distance between the two sites increases. Thus a more complicated order exists in this system, one which is not revealed by a simple two-spin correlation function (In this traitorous world, nothing is true or false, all is according to the color of the crystal through which you look.) This order is known as string order. In particular the ground state of the system is roughly a superposition over Néel states (over [tex]$S_z=\pm 1$[/tex]) with [tex]$S_z=0$[/tex] sites inserted into these states. The amplitude of each of these states in the superposition decreases exponentially in the number of inserted [tex]$S_z=0$[/tex] sites.
Okay, cool, so there is this nice model which has a very cool ground state whose order isn't captured by a two-spin correlation function but by some other, more interesting order. But what is cool about this experiment is that the authors are able to examine the excitations in this system. In particular they examine the creation of a triplet pair excitation at rest and show that these propagate over a fairly large distance before losing their coherence (roughly fifty lattice units.) Indeed, if I am reading the article correctly, it seems that this coherence is limited only by the length of the chains themselves (at low temperature; at higher temperatures thermal excitations can shorten this coherence length.) Cool! This, I think, should give hope to those who are interested in using spin chains for quantum computation, although, of course, TIALWFAQC (this is a long way from a quantum computer.)
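If you want to play with string order without doing the experiment, the AKLT state (the textbook caricature of this kind of ground state, my choice here and not the actual material studied in the paper) makes a five-minute numerical demo. Here is a sketch of my own using one standard 2x2 matrix product representation of the AKLT state: the ordinary two-spin correlator decays like [tex]$(-1/3)^{|i-j|}$[/tex], while the string correlator sits at a constant, [tex]$-4/9$[/tex], at every separation.

import numpy as np

# One standard choice of 2x2 matrices for the AKLT matrix product state
# (overall signs differ between references; the correlators don't care).
sp = np.array([[0., 1.], [0., 0.]])            # sigma^+
sm = sp.T                                       # sigma^-
sz = np.array([[1., 0.], [0., -1.]])            # sigma^z
A = {+1: np.sqrt(2. / 3.) * sp,
      0: -np.sqrt(1. / 3.) * sz,
     -1: -np.sqrt(2. / 3.) * sm}

def transfer(weights):
    """Transfer operator sum_m f(m) A^m (x) conj(A^m) for a single-site
    operator diagonal in S^z with eigenvalue f(m) on the S^z = m state."""
    return sum(w * np.kron(A[m], A[m].conj()) for m, w in weights.items())

E_id  = transfer({+1: 1, 0: 1, -1: 1})     # identity
E_sz  = transfer({+1: 1, 0: 0, -1: -1})    # S^z
E_str = transfer({+1: -1, 0: 1, -1: -1})   # exp(i pi S^z)

# For these matrices the dominant eigenvector of E_id is the vectorized identity.
edge = np.eye(2).reshape(-1)
norm = edge @ edge

for r in [1, 2, 5, 10, 20]:
    plain  = np.linalg.matrix_power(E_id,  r - 1)
    string = np.linalg.matrix_power(E_str, r - 1)
    two_point  = edge @ E_sz @ plain  @ E_sz @ edge / norm
    string_cor = edge @ E_sz @ string @ E_sz @ edge / norm
    print(f"|i-j| = {r:2d}:  <Sz Sz> = {two_point:+.6f}   string = {string_cor:+.6f}")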

Kielpinski Spaces

Reading Light by M. John Harrison, I ran across a neat reference to quantum computing.
So at the beginning of the book a main character kills a guy and returns to a lab (of course, doesn’t everyone go to lab after they’ve murdered someone?) where they are working with “q-bits” [sic]. Then this choice line (p.6):

“We can slow down the rate at which the q-bits pick up phase. We’re actually doing better than Kielpinski there – I’ve had factors of four and up this week.”

Despite the Cornell spelling (it’s so cold at Cornell that David Mermin loses the “u”?), cool! Hopefully, many of you will recognize the reference to Dave Kielpinski who did some amazing ion trap quantum computing experiments at NIST and is now at Griffith in Australia. Okay cool, a reference to a real quantum computing researcher.
But it gets better! A few lines later:

Somewhere off in its parallel mazes, the Beowulf system began modelling the decoherence-free subspaces – the Kielpinski space – of an ion pair…

Not only q-bits [sic] but also decoherence-free subspaces (no subsystems, alas)! And indeed this is a direct reference to papers Kielpinski was involved in: “A decoherence-free quantum memory using trapped ions,” D. Kielpinski, V. Meyer, M.A. Rowe, C.A. Sackett, W.M. Itano, C. Monroe, and D.J. Wineland, Science 291, 1013 (2001) and “Architecture for a large-scale ion-trap quantum computer,” D. Kielpinski, C.R. Monroe, and D.J. Wineland, Nature 417, 709 (2002). That former paper saved my butt during my thesis defense. An AMO physicist, about halfway through my defense, said something like “Well this theory is all good, but what about in the real world.” My next slide was a plot yoinked from that first paper showing the first experiment which demonstrated slower decoherence in a decoherence-free subspace under ambient conditions.
And, dude, from now on I am totally calling the DFSs in ion traps “Kielpinski spaces.”
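(And for the uninitiated, here is a toy numpy illustration, mine and not anything from the papers above, of what a decoherence-free subspace buys you: when two ions see the same fluctuating field along z, any state built out of |01> and |10> has zero eigenvalue under Z_1 + Z_2 and so picks up no random phase at all, while a state like (|00> + |11>)/sqrt(2) does not get that protection.)

import numpy as np

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
collective_Z = np.kron(Z, I2) + np.kron(I2, Z)   # both ions couple to the same field

def collective_dephasing(theta):
    """U = exp(-i theta (Z_1 + Z_2) / 2); it's diagonal, so exponentiate entrywise."""
    return np.diag(np.exp(-1j * theta / 2 * np.diag(collective_Z)))

# Basis order: |00>, |01>, |10>, |11>
dfs_state = np.array([0, 1, 1, 0]) / np.sqrt(2)    # (|01> + |10>)/sqrt(2), inside the DFS
bare_state = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2), outside the DFS
for theta in np.random.default_rng(1).uniform(0, 2 * np.pi, 3):
    U = collective_dephasing(theta)
    print(abs(np.vdot(dfs_state, U @ dfs_state)),    # always 1: the DFS state is untouched
          abs(np.vdot(bare_state, U @ bare_state)))  # |cos(theta)|: generally less than 1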

The Casinos Must Love This….Or Do They Hate It?

A New York Times article on computers playing poker heads up against humans opens with

For anyone stuck on a casino stool, playing hours of video poker, rest assured: humans can still beat a computer.

Um, well, first of all, video poker is quite different from the heads-up Texas Hold’em the entire article is about. Significantly different from an artificial intelligence standpoint. Second, as far as I know, video poker machines have payouts under optimal strategy which are usually right around break-even, and the hourly wages you’d get from playing this strategy are pretty pathetic even for the machines with a better-than-even payout. Since the majority of players are probably far from an optimal strategy, I’d guess video poker is quite the cash cow for the casinos.
But I wonder, when computers finally are able to beat humans at poker (okay, some will say, “never!” I will say, you’re allowed to say “never” when you can run a thousand-body simulation of a star cluster in your head. Reverse Turing Shazam!) whether this will actually hurt video poker machines. Hm, well, seeing how the casinos still seem to rake in the moolah with their slot machines, probably not.

Random Paper

A new paper, “On the Generation of Random Numbers.” Postscript available here. What do you think, should I submit it to the arXiv?