Toom's Rule, Thermodynamics, and Equilibrium of Histories

My recent post on thermodynamics and computation reminded me of a very nice article by Geoffrey Grinstein that I read a while back.
Suppose we are trying to store digital information in some macroscopic degree of freedom of a large system. Because we desire to store digital information, our system should have distinct phases corresponding to the distinct values of the information. For example, consider the Ising model in two or greater dimensions. In this case the macroscopic degree of freedom in which we wish to store our information is the total magnetization. In order to store information, we desire that the magnetization come in, say, two phases, one corresponding to the system with positive total magnetization and the other corresponding to negative total magnetization.
Assume, now, that the system is in thermal equilibrium. Suppose further that there are some other external variables for the system which you can adjust. For the example of the Ising model, one of these variables could be the applied external magnetic field. Since the system is in thermal equilibrium, each of the phases will have a free energy. Now, since we want our information to be stored in some sort of robust manner, we don’t want either of the phases to have a lower free energy, since if it did, the system would always revert to the phase with the lowest free energy and this would destroy our stored information. Since we require the free energies of all information-storing phases to be equal, we can always solve these equality equations for some of the external variables. This means that if we plot out the phases as a function of the external variables, we will always end up with coexisting phases along surfaces of dimension less than the number of external variables. For our example of the Ising model in an external magnetic field, what happens is that the two phases (positive and negative total magnetization) coexist only where the magnetic field equals zero. If you apply any magnetic field in the positive direction, then the thermodynamic phase which exists in equilibrium is the phase with positive total magnetization. So coexistence of phases, and in particular of information-storing phases, in the external variable space is always confined to a surface of dimension less than the number of external variables.
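For concreteness, the Ising version of this argument can be written out (this is just the standard textbook setup, not anything beyond what the paragraph above says):

```latex
% Ising model in an external field h (J > 0, nearest-neighbor sum):
H = -J \sum_{\langle i,j \rangle} s_i s_j - h \sum_i s_i
% Below the critical temperature the two phases have free energy
% densities f_\pm(T, h), and robust storage requires coexistence:
f_+(T, h) = f_-(T, h)
% The spin-flip symmetry s_i \to -s_i gives f_+(T, h) = f_-(T, -h),
% so the equality forces h = 0: one equation eliminates one external
% variable, leaving coexistence on a lower-dimensional surface.
```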
What is interesting, and why this gets connected with my previous post, is Toom’s rule. Toom’s rule is a two-dimensional cellular automaton rule which exhibits some very interesting properties. Imagine that you have a two-dimensional square lattice of sites with classical spins (i.e. +1 and -1) on each of the lattice sites. Toom’s rule says that the next state of one of these spins is specified by the current state of the spin, its neighbor to the north, and its neighbor to the east. The rule is that the new state is the majority vote of these three spins (i.e. if the site has spin +1, its northern neighbor has spin -1, and its eastern neighbor has spin -1, the new state will be spin -1.)
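A minimal sketch of the deterministic update in Python (the orientation of “north” in array indices and the periodic boundary conditions are my assumptions, just to make the rule concrete):

```python
import numpy as np

def toom_step(spins):
    """One deterministic Toom update on a lattice of +/-1 spins:
    each site becomes the majority vote of itself, its neighbor to
    the north, and its neighbor to the east."""
    north = np.roll(spins, -1, axis=0)  # assumed convention: north = next row
    east = np.roll(spins, -1, axis=1)   # assumed convention: east = next column
    # Three +/-1 spins always have a strict majority, so the
    # majority vote is just the sign of their sum.
    return np.sign(spins + north + east)
```

For instance, a site with spin +1 whose northern and eastern neighbors are both -1 flips to -1, as in the example above.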
Toom’s rule is interesting because it exhibits robustness to noise. Suppose that at each time step, instead of performing the correct update, each site gets randomized with some probability. What Toom was able to show was that if this probability of noise is small enough, then if we start our system with a positive magnetization (just like the Ising model, we define this as the sum of all the spin values), our system will remain with a positive magnetization, and if we start our system with a negative magnetization it will similarly retain its magnetization. Thus Toom showed that the cellular automaton can serve, like the two-dimensional Ising model at zero applied field, as a robust memory.
But what is nice about Toom’s rule is that it gives an even stronger form of robustness. Remember I said that the noise model was to randomize a single site. Here I meant that the site is restored to the +1 state with 50% probability and the -1 state with 50% probability. But what if there is a bias in this restoration? From the Ising model point of view, this actually corresponds to applying an external magnetic field. And here is what is interesting: for Toom’s rule the region where the two information-storing phases can coexist is not just the point where the (effective) external magnetic field equals zero, but instead is a region of external magnetic field between some positive and negative value (set by the probability of noise). So it seems that Toom’s rule violates the laws of thermodynamics!
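A sketch of the noisy, biased version (again with my assumed lattice conventions; the parameter values are illustrative choices meant to sit inside the robust region, not taken from Toom’s bounds):

```python
import numpy as np

def noisy_toom_step(spins, p_noise, bias, rng):
    """One noisy Toom update: each site takes the majority vote of
    itself, its northern neighbor, and its eastern neighbor, except
    that with probability p_noise it is instead re-randomized: set
    to +1 with probability `bias` and to -1 otherwise. bias = 0.5 is
    unbiased noise; bias != 0.5 acts like an external magnetic field."""
    north = np.roll(spins, -1, axis=0)
    east = np.roll(spins, -1, axis=1)
    updated = np.sign(spins + north + east)
    randomized = np.where(rng.random(spins.shape) < bias, 1, -1)
    noisy = rng.random(spins.shape) < p_noise
    return np.where(noisy, randomized, updated)

rng = np.random.default_rng(0)
spins = np.ones((64, 64), dtype=int)  # start in the + phase
for _ in range(200):
    spins = noisy_toom_step(spins, p_noise=0.05, bias=0.4, rng=rng)
# Despite noise biased toward -1, the + phase typically persists.
print(spins.mean())
```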
The resolution of this problem is to realize that the probability distribution produced by Toom’s rule is not given by a thermodynamic Boltzmann distribution! Toom’s rule is an example of a probabilistic cellular automaton whose steady state is not described by classical thermodynamics. This is exactly one of the models I have in mind when arguing that I do not know whether the eventual state of the universe is going to be in Gibbs-Boltzmann thermodynamic equilibrium.
Along these lines, Charlie Bennett and Geoffrey Grinstein have, however, shown that while the steady state of Toom’s rule is not given by a Gibbs-Boltzmann thermodynamic distribution, if one considers the histories of the state of the cellular automaton, instead of the state itself, then Toom’s rule is given by a Boltzmann distribution over the histories of the cellular automaton. It’s at this point that my brain just sort of explodes. That a system’s histories are in equilibrium is very strange: normally we think about equilibria being generated in time, but here we’ve already used up our time variable! I suspect that the answer to this puzzle can be achieved by referring to the Jaynes approach to entropy, but I’ve never seen this done.
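One way to make a “Boltzmann distribution over histories” concrete (using a toy two-state Markov chain of my own choosing, standing in for the CA dynamics): for any Markov process, the joint probability of a history factors into transition probabilities, and defining H(history) = -log p(history) puts the distribution over histories exactly in Boltzmann form.

```python
import numpy as np

# Toy two-state Markov chain standing in for the CA dynamics:
# T[a, b] = probability of stepping from state a to state b.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p0 = np.array([0.5, 0.5])  # initial distribution over states

def history_prob(history):
    """Joint probability p(x_0, x_1, ...) of a history under the chain."""
    p = p0[history[0]]
    for a, b in zip(history, history[1:]):
        p *= T[a, b]
    return p

def history_energy(history):
    """H(history) = -log p(history): a sum of local-in-time terms,
    because the dynamics are Markovian."""
    H = -np.log(p0[history[0]])
    for a, b in zip(history, history[1:]):
        H -= np.log(T[a, b])
    return H

h = [0, 0, 1, 1]
# The Boltzmann form exp(-H) reproduces the history probability exactly.
print(history_prob(h), np.exp(-history_energy(h)))
```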


7 Responses to Toom's Rule, Thermodynamics, and Equilibrium of Histories

  1. Joe says:

    This last, brain-exploding bit would seem to be just an identity. Starting with the probability of the history, i.e. joint prob. of CA state at each time instant p(x_0, x_1,…), we can break this down into a product of transition probabilities p(x_0)p(x_1|x_0)p(x_2|x_1)… since the dynamics are Markovian. Now take log of this, change the log of a product of factors into the sum of a bunch of log terms, and exponentiate (is that a real word?) to get back to where we started. Presto! Boltzmann distribution. But it’s just an identity since the Hamiltonian is the log of the probability.
    It doesn’t make sense to try to think of it in the Jaynes sense of “we know the average value of some quantity; what’s the distribution with the highest entropy under this constraint?” as the quantity of interest (Hamiltonian) already explicitly contains the probability. Taking the average of the Hamiltonian in this case gives the entropy.
    Did I miss something? I’m looking at the last reference you make, to the PRL in 85.

  2. Dave Bacon says:

    Hey Joe, I’m not sure what you mean by “just an identity.” Certainly you’ve described the procedure correctly, but I guess what I have a hard time wrapping my head around is a distribution over histories. When I think about thermodynamics, I think about a system which starts out of thermodynamic equilibrium and then, over time, relaxes to equilibrium. What would a similar process be for histories? A process which starts with a non-equilibrium history and then some dynamics on the histories relaxes it to equilibrium? A process ON histories? That sort of blows my mind, because I’ve still got Newtonian clockwork, general relativity ADM, and S-matrix quantum ideas about “evolution in time.”

  3. Joe says:

    I don’t think the distribution given is really an equilibrium distribution in the sense you describe. It’s a Boltzmann distribution due to the exp[-H] form, but you can always write any probability distribution in this form, provided you choose H to be -log p.
    Now, I’m not up on the precise meaning of equilibrium distribution, but it seems that we need a distribution which is stationary under the dynamics and has maximum entropy given the constraints of the initial distribution. It doesn’t seem like this case qualifies. Anyway, the point I’m trying to make is that the Boltzmann distribution is a concept from probability theory; just a type of distribution. Equilibrium is a concept from thermo, and there’s more to it (which doesn’t apply in this case) than just “the distribution is Boltzmann.”

  4. Dave Bacon says:

    I guess the point that I’m missing in my post is that the asymptotic distribution of Toom’s rule isn’t a Boltzmann distribution for a nice spiffy Hamiltonian.

  5. John Baez says:

    Dave Bacon writes:

    I guess the point that I’m missing in my post is that the asymptotic distribution of Toom’s rule isn’t a Boltzmann distribution for a nice spiffy Hamiltonian.

    In ordinary classical (resp. quantum) mechanics, time evolution is generated by the Hamiltonian H in a well-known way, via Hamilton’s (resp. Schroedinger’s) equations. Surely this has something to do with why thermal equilibrium for such systems is given by Boltzmann’s rule, where the probability of occupying a state with H = E is proportional to
    exp(-E/kT)
    where T is the temperature and k is Boltzmann’s constant.
    Since time evolution for cellular automata is typically not given by any equation involving a Hamilton, I wouldn’t expect any rule like this to hold for cellular automata.
    (Some cellular automata can be described using Hamiltons, as shown by my student James Gilliam. But, he never studied the statistical mechanics of these rules.)

  6. John Baez says:

    Grr… the last two appearances of “Hamilton” in my post should have been “Hamiltonian”.
    Nice blog, by the way!

  7. Dave Bacon says:

    Cool. Thanks for the reference John. It would be fun to look at the statistical mechanics of the rules.
    There used to be a time when every search I did for something research-related on the web turned up your “This Week’s Finds…” Nowadays whenever I search I end up getting my own blog posts, which is kind of annoying (’cause I certainly know that those posts don’t contain the answer!)
    P.S. your “Quantum quandaries: a category-theoretic perspective” is one of my favorite papers and I’m always trying to get people in quantum information science to read it.
