Time After Time

Ole Peters was a postdoc at the Santa Fe Institute while I was a postdoc there. In addition to being a world-class windsurfer, Ole likes to think about critical phenomena and stochastic processes. And in the TEDxGoodenoughCollege talk below he almost convinces me that I need to think harder about ensemble versus time averages 🙂

This entry was posted in Economics, Mathematics.

8 Responses to Time After Time

  1. Frank says:

    I think the example he presents in the talk is not fair: to model his game as a stochastic process, he should take the logarithm of the “wealth” function (so that it becomes a sum of independent variables), and then compare the time average with the ensemble average.
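
A minimal simulation illustrates the gap Frank is pointing at. (The +50%/−40% coin-flip multipliers are an assumption, taken from Peters' usual version of the game; they are not stated in this thread.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed game: each round, wealth is multiplied by 1.5 (heads)
# or 0.6 (tails) with equal probability.
up, down = 1.5, 0.6

# Ensemble average: the expected multiplier per round is 1.05,
# so the ensemble average grows -- it looks like a winning game.
ensemble_rate = 0.5 * (up + down)

# Time average: follow one long trajectory; the growth rate of
# log-wealth is 0.5*(ln 1.5 + ln 0.6), which is negative.
factors = rng.choice([up, down], size=1_000_000)
time_rate = np.log(factors).mean()

print(ensemble_rate)  # about 1.05: the ensemble perspective says "play"
print(time_rate)      # about -0.053: the typical trajectory loses
```

Taking the log first, as Frank suggests, turns the product into a sum of i.i.d. variables, and the two averages then agree on the sign.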

  2. dabacon says:

    Frank: Define fair 🙂

  3. dabacon says:

    Robin: let me play the devil’s advocate (since I agree with much of what you and Frank say). If you have to adjust the quantity you are looking at to make the ensemble average equal the time average, doesn’t that suggest that in more complicated, realistic situations you have to be really careful about mixing these two ideas?

  4. Robin says:

    I’ll rephrase Frank’s comment. If you consider the ensemble-average of log(wealth), then you find that the time average and the ensemble average are consistent. That is, both indicate that it’s a losing game.
    This suggests the question “Why log? Why not sqrt, or whatever?” Good question. The best answer is that this game is (like the St. Petersburg paradox) one with very long tails that make the mean an unreliable statistic. The median, however, is a reliable statistic. In particular, and unlike the mean, it’s invariant under monotonic transformations. And in this case, the median tracks the mean of the log.
    Ole’s talk is interesting and provocative. But, since as a scientist I feel a certain duty to be skeptical, I’d like to suggest that much of what he says and shows can be boiled down to: “When faced with an ensemble, consider the median [instead of / as well as] the mean.” Which is eminently sensible, but not especially exciting.
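
Robin's point is easy to see in numbers. The sketch below compares the mean and median of final wealth (again assuming the +50%/−40% multipliers from Peters' usual example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed game: 100 rounds of multiplying wealth by 1.5 or 0.6,
# for a large ensemble of players all starting at wealth 1.
factors = rng.choice([1.5, 0.6], size=(200_000, 100))
wealth = factors.prod(axis=1)

mean_w = np.mean(wealth)      # inflated by a handful of very lucky paths
median_w = np.median(wealth)  # tiny: the typical player is nearly ruined

# The median tracks the exponential of the mean log-growth:
typical = np.exp(100 * 0.5 * (np.log(1.5) + np.log(0.6)))
print(mean_w, median_w, typical)
```

The long upper tail drags the mean far above the median, which is exactly why the mean is an unreliable statistic for this game.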

  5. Ole says:

    Thank you for your great comments. All valid points, but let me respond briefly.
    Frank: if you take the ensemble-average rate of change of the logarithm of wealth, you’re right that it reproduces the time-average exponential growth rate of wealth. Why? Silly question, I know, but here’s an interesting perspective. Ergodicity in this context can be viewed as the question whether two limits commute (sample size → ∞ and time → ∞). You can write the logarithm as the limit ln(x) = lim_{n→∞} n(x^{1/n} − 1). This limit turns out to be equivalent in the calculation to the limit time → ∞. So taking the ensemble average of the logarithm means you are taking an ensemble average of a time average. Because the noise has already been killed by the time average (implicit in the logarithm), all ensemble members are identical, and you end up with the time average (the inner limit).
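
The limit Ole invokes is straightforward to check numerically (x = 7.3 is an arbitrary test value):

```python
import math

# Ole's identity: ln(x) = lim_{n -> inf} n * (x**(1/n) - 1).
x = 7.3
for n in (10, 1_000, 100_000):
    print(n, n * (x ** (1 / n) - 1))  # converges toward ln(7.3)
print(math.log(x))
```

The error shrinks like (ln x)^2 / (2n), so by n = 100,000 the two values agree to several decimal places.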
    Robin: good question indeed. The logarithm is a very special function, in this case it encodes the time average in an ensemble average — wow! Messing with the logarithm (like using a sqrt instead) produces results that are very difficult to interpret physically. In the St. Petersburg paradox the special role of the logarithm has been underestimated (Bernoulli wrote that the sqrt is just as good). If you’re interested, have a look here: http://arxiv.org/abs/1011.4404
    In statistical mechanics the logarithm ensures entropic extensivity (and a few other properties). Messing with it has been done much more carefully there than in economics (see e.g. Hanel, Thurner, Gell-Mann: http://www.pnas.org/cgi/doi/10.1073/pnas.1103539108), although the precise physical interpretation of, e.g., entropies with generalized logarithms poses a problem there too.
    The median will eventually (as time → ∞) reflect the time average in our game, true, and I tend to agree with you that it’s a more meaningful statistic. But it really all depends on what you want — maybe you are interested in the ensemble average. What if you’re the US government and you have 300 million individuals whose “ensemble”-average earnings determine your taxes? But the time-average growth reflects how the typical individual is getting on…
    Dave: Thanks for letting me know about the posts. I agree, this is about thinking carefully both about what it is you want your mathematical measure to reflect and about implementing an appropriate measure in a given situation.

  6. Robin says:

    Dave: I agree completely. In fact, I’ll say something stronger — there are lots and lots of situations where time and ensemble averages are totally different. Let’s not make too big a straw man out of the ergodic hypothesis. According to Wikipedia, it says that “over long periods of time… all accessible microstates are equiprobable”. This is pretty specific to physics, and even then it’s specific to certain systems. (Nobody has ever argued to me that harmonic oscillators are ergodic). I wasn’t aware that ergodicity was taken for granted outside of that realm.
    Regarding your first point (about adjusting the quantity): this is the problem with the mean. The mean of f(x) is in general not equal to f(⟨x⟩). So it’s not really a matter of “adjusting” the quantity — there’s a more fundamental problem of “What f(x) do you pick in the first place?” Counterintuitive though it is, there’s nothing sacred about f(x)=x. For instance, the utility of money isn’t linear. So the mean value of wealth isn’t very meaningful. There’s no consensus on how its utility does scale, so it’s hard to justify the mean of any f($).
    Which is why I’m [somewhat] happier with the median — precisely because it doesn’t vary that way: the median of f(x) is f(median of x) for any monotonic f.

  7. Robin says:

    Ole: I’ve just read 1011.4404, and I commend you on a very clear and thoughtful paper. I can’t respond comprehensively in this space — perhaps we ought to chat over a pint someday. But, while I can’t respond to all the trenchant and interesting points in the paper, I do want to comment on the central argument.
    While I agree with all of your mathematics — which, in turn, agree with Bernoulli’s — I fear I remain unconvinced by the idea that time averaging is the central concept. I believe that you’ve actually built your argument upon the very concept that you reject — the use of logarithmic money.
    Not, I hasten to emphasize, because of any assumptions about utility. Rather, right around Eq. (5.1), you make the key step of examining the multiplicative factor — the ratio of post-gamble wealth to pre-gamble wealth. And then, very sensibly, you consider its logarithm.
    This is indeed the right thing to do. But you have just made exactly the same logarithmic transformation that Bernoulli did. Now, I agree that Bernoulli’s argument was specious — the utility of money is not necessarily logarithmic. The correct reason for treating money logarithmically is that — in games of Kelly type, including St. Petersburg — money increases or decreases geometrically. Its logarithm therefore undergoes a linear random walk.
    This statement does not hold for any other function of money. And it is precisely because of this linear random-walk behavior that all sorts of things work out nicely. It is linearity that ensures that rates of change at different times are independent (which gives your central result), and therefore that the ensemble and time averages of log r are equal. And it is the random-walk behavior (which follows from linearity) that makes the distribution of log($) at any given time a binomial distribution, and therefore ensures that the median equals the mean.
    In summary, I feel that the focus on time average rather than ensemble average is something of a red herring — and that the essential concept is, instead, a [logarithmic] transformation of the main variable, whose necessity is derived not from any notion of utility, but rather from the dynamical map defining the game.
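
Robin's random-walk observation can be checked directly: log-wealth is a sum of i.i.d. steps, so at any fixed time its distribution is a shifted, scaled binomial, symmetric about its mean, which is why mean and median of log-wealth coincide. (This sketch again assumes the 1.5/0.6 multipliers.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Log-wealth after 100 rounds is a sum of 100 i.i.d. steps,
# i.e. a linear random walk evaluated at time 100.
steps = rng.choice([np.log(1.5), np.log(0.6)], size=(200_000, 100))
log_wealth = steps.sum(axis=1)

print(np.mean(log_wealth))    # about -5.27 (= 100 * mean step size)
print(np.median(log_wealth))  # about the same: the walk is symmetric
```

Contrast this with wealth itself, where the exponential map breaks the symmetry and pushes the mean far above the median.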

  8. Ole says:

    Robin: Thank you for your kind words about my paper. I’m glad you enjoyed it. Yes, probably time for a pint.
    If one insists on the notion of utility, i.e. a nonlinear value of money (for linear f(x) we do have ⟨f(x)⟩ = f(⟨x⟩)), then the arguments in http://arxiv.org/abs/1011.4404 can be construed as arguments for the specific form of logarithmic utility. But my perspective is that we shouldn’t even start talking about utility, shouldn’t introduce some value function, before we run out of objective methods, preferably rooted in the laws of physics. In the St. Petersburg paradox we only have to invoke time, so let’s not conflate physics (time) with psychology (utility).
    You’re absolutely right that it’s key that the dynamics of wealth are multiplicative. To take a time average we must have a dynamic — there is no dynamic specified in Bernoulli’s game, it just sits there in a vacuum, so you could argue the problem is not well-posed. It’s reminiscent of equilibrium vs. non-equilibrium statistical mechanics: in computer simulations of an Ising model in equilibrium, the dynamic is irrelevant as long as it leads to a sampling of the phase space consistent with the equilibrium weights (Boltzmann factors) — you can use Metropolis or Swendsen-Wang or… But if you’re interested in non-equilibrium behavior (relaxation, nucleation etc.), where time is more meaningful, then the specific dynamic is crucial. Also here, I think it’s fair to say that we’ve only just started in the last few decades to understand the significance of time (or dynamics, or non-equilibrium).
    The “Ergodic hypothesis” Wikipedia entry seems to refer mostly to the origin of the concept, namely Boltzmann’s microcanonical ensemble with only energy conservation; the “Ergodic theory” entry seems more inclusive. Since Boltzmann there have been major developments, the most relevant to our context being the development of ergodic theory for stochastic processes. This literature gets quite mathematical quite fast. Chapter 9 in “Probability and Random Processes” by Grimmett and Stirzaker gives a broad idea, and the first chapters in “Ergodicity for Infinite Dimensional Systems” by Da Prato and Zabczyk are almost penetrable.
    Everything you say makes sense, we just have slightly different perspectives. Personally I’ve gained clarity from mine, reflected in a number of further results and predictions that I wouldn’t have arrived at otherwise. Happy to continue the discussion, but perhaps offline.
