Nitpicker's Paradiso: Paul Davies Anthropic Edition

Paul Davies’s essay in the New York Times, “Taking Science on Faith,” is sure to raise hackles in the science community. Me, I’d just like to point out how silly some of the specifics of Davies’s arguments are. Yes, it’s another edition of “Nitpicker’s Paradiso.”

Davies begins with a mantra yelled by theists ever since science began getting things right and removing the need for supernatural explanation (here done valley girl style): “But, like, what you’re doing must be taken, like, on faith because, like, why do you have, like, faith in science? Wah! Wah!” But let’s skip ahead and not deal with the substance of Davies’s argument (which I find rather unconvincing, and in portions downright deceitful in its use of the vagaries of language: certainly his use of the word “faith” differs markedly from my evangelical friends’ use of the word, as does his use of the word “science” from that of my scientific friends) and instead find someplace where we can really nitpick Davies to death.
Ah, here we go:

Part of the reason is the growing acceptance that the emergence of life in the universe, and hence the existence of observers like ourselves, depends rather sensitively on the form of the laws. If the laws of physics were just any old ragbag of rules, life would almost certainly not exist.

This is exactly the kind of baloney that makes me scream every time I see a talk on the anthropic principle (Stephen Hawking makes me cry!).
First of all, sentences like the above show the kind of lack of quantification that is the hallmark of bad scientific philosophizing everywhere. I mean, what does “depends rather sensitively on the form of the laws” mean? Does he mean the values of the coupling constants in the standard model? And what evidence is there that if, say, the fine structure constant were different by one part in ten trillion, we would not exist as we currently do? Okay, so you say, the anthropic principle will just say that the parameters need to be in a certain range. But notice what Davies is saying. He’s saying that “the form of the laws” is important. Worse than assuming a final theory, which we don’t currently have, Davies has assumed that there is some notion of the laws of physics over which one can talk about the parameters being just right.
But do we really know that quantum field theory, or string theory, or loop quantum gravity is the correct “form” of the laws of physics? No. The space of physical laws is a lot bigger than just the space of possible Lagrangians for the standard model. And I’m pretty sure that the space of all laws of physics is a mite more problematic to pin down, parameterize, and use to say that our universe is the way it is because if it weren’t that way we wouldn’t exist. Since we don’t have a “final” physical theory, it makes little sense to talk about how it is fine tuned (like fine tuning next year’s Lexus before it has been built). Anthropic reasoning (of the type Davies employs) shows the kind of uncreative thinking about possible physical theories which I personally find deeply authoritarian and which drives me bonkers. If, however, Davies would like to show me some research with a parameterization of the possible laws of physics which goes beyond simple parameters in a string theory or quantum field theory Lagrangian, I’d be happy to hear about the research and to learn why it is that quantum field theory is such an effective description of low energy physics.
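The parameterization problem above can be made concrete with a toy Monte Carlo sketch. Everything here is invented for illustration (the “life-permitting” window and both priors are made up, not physics): the point is just that the estimated probability of a constant landing in a narrow window depends entirely on which measure over the parameter space you choose, and nothing in “the form of the laws” picks that measure for you.

```python
import random

random.seed(0)

# A toy "life-permitting" window for some dimensionless constant.
# The numbers are invented for illustration; this is not real physics.
LO, HI = 1 / 138, 1 / 136

def fraction_in_window(sampler, trials=100_000):
    """Estimate the probability that a sampled constant lands in the window."""
    hits = sum(1 for _ in range(trials) if LO <= sampler() <= HI)
    return hits / trials

def uniform_prior():
    # "Natural" choice 1: uniform on [0, 1].
    return random.uniform(0.0, 1.0)

def log_uniform_prior():
    # "Natural" choice 2: uniform in log10 over [1e-6, 1].
    return 10 ** random.uniform(-6.0, 0.0)

p_uniform = fraction_in_window(uniform_prior)
p_log = fraction_in_window(log_uniform_prior)

# The answer to "how fine-tuned is the constant?" differs
# depending purely on which parameterization you sample.
print(f"uniform prior:     {p_uniform:.5f}")
print(f"log-uniform prior: {p_log:.5f}")
```

Under these invented numbers the two equally “natural” priors disagree by roughly a factor of ten, and that is with a fixed parameter space; Davies’s claim requires a measure over the space of possible *forms* of the laws, which no one has written down.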

19 Replies to “Nitpicker's Paradiso: Paul Davies Anthropic Edition”

  1. As I said in the thread over at denialism, I don’t quite understand why it’s called “the anthropic principle” anyway. Even if it weren’t premature to state that “if the laws of physics were different, we wouldn’t be here to observe them”, there are all kinds of contingencies beyond basic physical constants that our existence depends upon. The biotic history of Earth could’ve changed at any point and yielded a state of affairs without us. There is also the questionable nature of the adaptive value of so-called “higher intelligence” in the first place. These are two things I can think of off the top of my head, but I think you get the point. If one wants to argue that our particular cosmological trajectory is specifically geared toward humans, then one has far more hurdles to overcome than those found in physics.

  2. What a mess. I’m most amused by his claim that the laws of physics are independent and unrelated to the “laws of the universe.”

  3. “The space of physical laws is a lot bigger than just the space of possible Lagrangians for the standard model….”
    “parameterization of the possible laws of physics”
    I’ve never seen a satisfactory description of the space of all possible physical laws. Tegmark waves his hands at it.
    What is the dimension of the space of all possible physical laws? What is its topology? Feynman used to discuss with me his worry that we’d be unable to tell if the universe had an infinite number of fundamental physical laws, even if some only emerge at very high energy or very weird boundary conditions.
    Davies has a cute handwaving rant about how what we think are physical laws are merely “local bylaws” — but he steadfastly fails to clarify this, let alone quantify it.
    What would it mean to say that laws change over time? Or between one universe and another in the multiverse?
    What are the operators that map from one physical law to another? Are those meta-laws of physics, or laws of metaphysics? And where do they come from? Is there a fixed point in all this, or is it turtles all the way down?
    Davies is an entertaining writer and speaker, but I think he’s doing something worse than mutating the Strong Anthropic Principle. He’s inventing an intentionally vague Meta-Anthropic Meta-Principle.
    I’d be delighted to be proved wrong. For that matter, I’d be happy to win one of those Templeton prizes myself — I certainly spout enough Theophysics and Theomathematics. Maybe I just need more of it in arXiv and bestsellers. Paul Davies shows us the path.

  4. Congratulations! You just cost Paul Davies a sale–I was about to buy Cosmic Jackpot, but after this I don’t think so.

  5. I am a biologist and not a physicist, so I will grant Davies and other anthropic principle adherents that if the physical laws were just a little different, the universe would be very different and we would not be here.
    However, suppose that if instead of a universe made up of quarks and gluons, the laws led to a universe of squirminos and blakions. Isn’t it likely that somewhere in that universe there would be sentient creatures marveling at how the universe appeared to be specially created for them?

  6. There is a precise way of talking about different laws of nature, and that is via C* algebras (admittedly not the most general framework for the laws of physics, but it is a start). The Bub-Clifton-Halvorson theorem is derived this way (the one that says that any theory that does not permit cloning, superluminal signalling, or secure bit commitment must closely resemble the laws of quantum mechanics).
    And as Einstein was discussing with me the other day, it is perfectly fine to use the anthropic principle to explain why we live on Earth, rather than on some other planet. It is because we can safely assume that there are numerous other planets out there, and our little rock happened to be the one (or one of the few) where life could evolve into us. To apply this to the universe one has to assume that there are a multitude of universes out there. The string landscape notwithstanding, this is much less certain than the multitude of planets.
    Finally, I do have a soft spot for Davies, because he had the (for me) inspiring idea that the laws of physics themselves may not be infinite-precision statements, but also subject to uncertainty. This is related to Lloyd’s total information content of the universe, which increases over time. Davies muses that the laws of physics may have become more precise as time marched on, because there is more information in the universe now than there was before.
    Oh, and he wrote QFT in curved spacetime, which is a brilliant book…
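The planetary selection effect described in the comment above can be sketched with a toy simulation (the planet count and habitability fraction are invented, purely for illustration): even when habitable planets are a vanishing fraction of the whole, every observer who exists necessarily finds themselves on one, so observing a hospitable planet carries no surprise.

```python
import random

random.seed(1)

N_PLANETS = 1_000_000
P_HABITABLE = 1e-4  # invented fraction of planets where life can evolve

# Each planet is independently habitable or not.
habitable = [random.random() < P_HABITABLE for _ in range(N_PLANETS)]

# Observers can only arise on habitable planets, so conditioning on
# "someone is asking the question" picks out the habitable ones.
planets_with_observers = [h for h in habitable if h]

prior_fraction = sum(habitable) / N_PLANETS
print(f"fraction of habitable planets: {prior_fraction:.5f}")
print(f"every observer sees a habitable planet: {all(planets_with_observers)}")
```

As the comment notes, the catch in exporting this argument from planets to universes is that the many-planets premise is well supported, while the many-universes premise is not.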

  7. However, suppose that if instead of a universe made up of quarks and gluons, the laws led to a universe of squirminos and blakions. Isn’t it likely that somewhere in that universe there would be sentient creatures marveling at how the universe appeared to be specially created for them?
    In his book, at least, he specifically applies the anthropic principle to “life as we know it.” I agree this seems really, really weaselly. Because I suspect that there are many possible laws of physics which lead to life, I would guess that the anthropic principle applies in all of those possible worlds. Thus its explanatory power seems to me, umm, weak.

  8. There is a precise way of talking about different laws of nature, and that is via C* algebras (admittedly not the most general framework for the laws of physics, but it is a start).
    A good start, but anything that starts with C* algebras, well, doesn’t that seem a little narrow-minded 🙂 If past history is any guide (and it isn’t, really), then the math we currently use to describe physical theories may look very little like the math of a future theory.

  9. Don’t forget The Brontothropic Cosmological Principle, which I learned about from a comment by Tjallen on Brad DeLong’s Semi-Daily Journal (as edited by B. DeLong):
    About [130] million years ago, there was the Brontothropic principle….
    The fundamental constants of the universe must be such to allow the Brontosaurus to live and thrive
    [No wait!–T]hey’re gone…

    [Tjallen continues:]

    So the universe made with these fundamental constants is one that did NOT support us for more than a flicker of time. If only the constants chosen were slightly different, humans might have survived… Why didn’t the maker of the universe choose the constants that keep humans alive? Oh, he looks like a roach instead?
    [A: Tjallen seems to have found the perfect answer to the Anthropic Principle nonsense, more honor to him.]

  10. As I said in the thread over at denialism, I don’t quite understand why it’s called “the anthropic principle”
    Because there is a most natural expectation for what the universe should look like that is not observed. What is observed instead has carbon-oriented features that indicate that the elusive “dynamical structure principle” is biocentric in nature. This is also supported by the direct but willfully ignored, observational evidence:
    http://cerncourier.com/cws/article/cern/29210;jsessionid=EF669C8E1BCD4DCF4FE6D7A5D262C11A
    http://www.answers.com/topic/copernican-principle
    Please click on my link and go to my blog. Find the “Goldilocks Enigma” or “A Very Strong Anthropic Principle” in order to reply to me, because I do not want to get into a fight with a bunch of dogmatic anticentrists (like I just did at Pharanfoolya), who just *know* that they’ve got it all figured out, but aren’t required to support anything that they say.

  11. Wisdom from Uncle Carl:

    There is something stunningly narrow about how the Anthropic Principle is phrased. Yes, only certain laws and constants of nature are consistent with our kind of life. But essentially the same laws and constants are required to make a rock. So why not talk about a Universe designed so rocks could one day come to be, and strong and weak Lithic Principles? If stones could philosophize, I imagine Lithic Principles would be at the intellectual frontiers.
    (Pale Blue Dot)

    Furthermore, I suspect that Lithic Principles would be great favorites among the philosopher-stones who retain their childhood affection for the great god Volcano, but want to distance themselves from the crudities of Intelligent Sedimentation.

  12. I’m on the same page. To me, the “fine tuning” argument has always seemed more aptly called “the lack of imagination” argument, which is ironic given that its proponents seem to think they are being especially open-minded.
    Simply put, once you move outside of observed physical laws and start asking “why is anything the way it is” you’ve moved into a realm of near infinite possibility, to which a mere variation of known constants is really just the tiniest of conceivable spaces.
    We do not know what is “possible” for universes (and no one is talking about multiverses here: just what’s “likely” for the way this one turned out). We do not know if the constants can vary, and if so, by how much. But that’s only the start of it. For all we know, our system of constants is remarkably limited and paltry compared to what it’s most likely for a universe to have. For all we know, in the set of possible universes, ours is one of the most unlikely in its incredible HOSTILITY to life, not its fine tuning for life. Any argument that ultimately might as well suggest an intelligent RUINER as an intelligent designer is not a very productive or enlightening one.

  13. Several good points have been made. PK was right to bring in C* algebras, which von Neumann did invent to generalize Quantum Mechanics, and in which tremendous gains have been made in the past decade or two, known mostly to mathematicians and not trickling down to Physics as it is practiced. I could say more, but this posting will be long enough already.
    Tex, Tjallen, and Blake are right to embed the Anthropic Principle in a higher-order manifold which has the Brontothropic Cosmological Principle and the Lithic. I wrote a science fiction novel manuscript about the cosmic war between our descendants and intelligent worms, where each is trying to break symmetry and produce a 5th force, which is either useful to people, or better upholds the Vermic Principle.
    PK can now talk to Einstein the same way that I can now talk to Feynman, or anyone else dead but still alive in the past nappe of our light cone. But QM is only part of the edifice of our model of physical laws. There is also General Relativity, and Quantum Field Theory (including Quantum Electrodynamics, Quantum Chromodynamics, and various approaches to explaining the Standard Model). Unfortunately, these are incompatible.
    And the question is open: how “high level” (“Level 5” or above in the summary below) is the Math needed to represent all the viable theories, and the ones that in some sense “could be true” in alternative universes within the multiverse? And what Math allows us to represent meta-laws under which laws of Physics can change? John Baez is a leader in the restructuring of Physics through n-Category Theory. Is the universe “really” based on Categories, n-Categories, or the familiar Set Theory and Hilbert Space underpinnings?
    Terry Tao summarizes Richard Borcherds talk “What is a quantum field theory?” at the April 2007 Fields Medalist Symposium:
    Richard is best known for his work in lattices and group theory, most notably in explaining the monstrous moonshine phenomenon, but in recent years he has moved to a completely different area of mathematics, namely mathematical quantum field theory (QFT), which Richard did a very admirable job of explaining. He began by contrasting the very different perspectives of mathematicians and physicists on the subject; from the mathematical side of things, he mentioned the various axiomatic formulations proposed for QFT (Wightman axioms, Haag-Kastler axioms, Osterwalder-Schrader axioms, etc.), but then mentioned the main difficulty with these formulations, namely that none of the major interacting four-dimensional spacetime QFTs (QED, QCD, standard model, etc.) are known to obey any of these axioms. (The free QFTs obey the axioms, as well as many two-dimensional and a few three-dimensional ones.) On the physical side, the emphasis is more on computing the Green’s function for a QFT, which formally can be expressed as a Feynman path integral, which in turn is formally expandable as an infinite sum (essentially a Dyson series) over Feynman diagrams of various finite-dimensional integrals; these sums are often horribly divergent, but nevertheless by means of various tricks of varying levels of mathematical rigour, physicists have been able to compute at least the first few terms of these sums and get some predictions which are in extraordinary agreement with experimental data. Most of Richard’s talk was on explaining how the mathematical and physical viewpoints could (hopefully) be reconciled.
    Richard gave us a very interesting and useful “complexity hierarchy” to view the various spaces in both classical and quantum field theory, using things like the symmetric algebra construction V \mapsto S(V) to go from one level to the next (thus one can view spaces in level n+1 as consisting of some sort of “polynomials” of objects in a level n space). According to Richard, one of the main reasons why QFT is conceptually difficult is that it routinely uses spaces which are very high up in the hierarchy. For example, in a classical field theory (CFT), ignoring all analytic questions of convergence, differentiability, integrability, etc.,
    * “Level 0” spaces are finite-dimensional spaces such as the spacetime M, the gauge group G, the principal vector bundle B over M, and so forth. (For instance, in a scalar field theory, G is the real line, and B is just M \times {\Bbb R}.) Classical fields \phi are then just sections of these bundles; for instance, a scalar field is just a map \phi: M \to {\Bbb R}. The jet bundles of B also qualify as Level 0 spaces.
    * “Level 1” spaces include things like the space of differential operators on M (or on the bundle B), which can be viewed as polynomials over the “Level 0” vector fields \partial_i. For instance, the d’Alembertian \nabla^\alpha \partial_\alpha would belong to this Level 1 space. A little more generally, the Poisson algebra of a jet bundle is a Level 1 space.
    * “Level 2” spaces include the space of polynomial combinations of objects from Level 1 spaces applied to a classical field \phi. In particular the space {\mathcal L} of Lagrangian densities, of which L(\phi) := \partial^\alpha \phi \partial_\alpha \phi + m^2 \phi^2 + \lambda^4 \phi^4 is a typical example, is a Level 2 object. This is the level where standard explanations of classical field theory usually stop; the theory asserts that classical fields must be critical points for the associated action S(\phi) := \int_M L(\phi), and that is an adequate description of the theory. But one can continue onward:
    * “Level 3” spaces include the Poisson algebra generated by {\mathcal L}, which contains such objects as the Poisson bracket \{ S_1, S_2 \} between two actions. This algebra is implicit in things such as Noether’s theorem, but is usually not discussed explicitly. Using the Poisson bracket structure, elements of this Level 3 space can be viewed as “vector fields” or “flows” on the space of all fields in the classical field theory; in particular, infinitesimal symmetries live in a Level 3 space.
    * “Level 4” spaces include the universal enveloping algebra of the previously mentioned Poisson algebra (which is of course a Lie algebra). This is where “differential operators” on the space of all fields will live. I think also that canonical transformations (such as those given by non-infinitesimal symmetries, e.g. spatial translation by a non-zero distance) are also supposed to (formally) lie in a Level 4 space, though I am a bit uncertain on this point.
    So while CFTs mostly top out at Level 2, QFTs seem to really require all levels up to Level 4:
    * “Level 0” spaces of a QFT are much the same as those of the associated CFT: the spacetime, the bundle, etc. The quantum fields \phi(x) are no longer sections of the bundle, though, but should be interpreted for each x as (nastily singular and unbounded) operators on some abstract Hilbert space (it seems to be unprofitable to try to make this space concrete until much later in the theory). (Incidentally, these quantum fields are not the wave function |\psi\rangle that one is used to from the Schrodinger formulation of non-relativistic quantum mechanics, but instead represent the (spacetime) position operators from the Heisenberg formulation.)
    * “Level 1” spaces again include the space of differential operators, but now acting on quantum fields rather than classical fields. (There is of course the usual problem that these operators might be unbounded and thus only be densely defined on the Hilbert space of interest, but there are standard ways to deal with these difficulties.)
    * “Level 2” spaces again include the space {\mathcal L} of all Lagrangians, which are polynomials that convert a quantum field \phi(x) to another (formally) operator-valued function of spacetime.
    * “Level 3” spaces include the space of all Feynman path integrals, e.g. \int \phi^4(x) \phi^3(y) e^{i \int_M L(\phi)} D\phi. In particular they include Green’s functions.
    * “Level 4” spaces include the space of generalised Wightman distributions, which include things like \langle \emptyset | \phi(f_1) \ldots \phi(f_n) | \emptyset \rangle where |\emptyset \rangle is the vacuum state and f_1,\ldots,f_n are various bump functions in spacetime, but also include more general objects in which the \phi(f_1) \ldots \phi(f_n) factors are replaced by any other time-ordered operators, such as those coming from the Level 3 space. (I admit I didn’t understand this point very well.) Apparently, the space of all renormalisations is also a Level 4 space.
    Richard then talked about the various attempts to build a QFT starting from the Lagrangian as the foundational object. He mentioned Dirac’s philosophy of building the Lie algebra structures first, and only worrying about exactly what the Hilbert space H was at a very late stage of the theory; indeed, trying to apply standard “prequantisation” methods such as proposing L^2(M) as the Hilbert space seemed to run into fundamental difficulties (e.g. the action of the center was wrong). There was some fix to this involving the choice of a “polarisation”, but this seemed somewhat ad hoc and didn’t seem to work in all cases (I didn’t follow this bit well). [Incidentally, Richard made a cute observation, which was that the theory becomes a little cleaner notationally when Hilbert spaces were not viewed as complex vector spaces, but rather as complex bimodules, with the complex numbers acting in the usual linear manner on the left but in an antilinear manner on the right, vz = \overline{z} v. More generally, operators should act on vectors on the left in the usual manner but on the right by the adjoint operator. This ends up reconciling the “mathematical” and “physical” notation in the subject quite nicely.]

  14. I strongly agree with Dave Bacon, that what we must consider is Life As We Do Not Know It, and Laws of Physics As We Do Not Know the Math, yet. C* algebras and Category Theory may not be the way forward, forever.
    Yes, John von Neumann 80 years ago axiomatized Hilbert space and proved the commutant theorem for strong-operator-closed self-adjoint algebras of bounded linear operators acting on that space, nicely timed to be almost simultaneous with Heisenberg’s matrix mechanics and the equivalent Schrodinger wave mechanics, as seen in his famous monograph “Mathematical Foundations of Quantum Mechanics.”
    But there is a limit to generalizing this. Fifteen years later Gelfand and Naimark showed that, while a von Neumann algebra (abstractly, as a *-algebra) has essentially a unique Hilbert space representation (i.e. canonical, multiplicity 1, any other representation determined just by multiplicity), a self-adjoint algebra of Hilbert space operators only assumed to be norm closed (now called a concrete C*-algebra) in general has many very different Hilbert space representations. That cannot happen when there are a finite number of degrees of freedom in a quantum mechanical system.
    Then around when I was born at the start of the 1950s, Garding and Wightman showed that the C*-algebra with infinitely many degrees of freedom has infinitely many inequivalent irreducible representations.
    The classification questions this raised were mostly solved by Connes et al., and by George Elliott’s generalization of Glimm’s 1959 classification of infinite tensor products of matrix algebras to AF (approximately finite) algebras, with deep connections to K-theory, Connes-Takesaki flow of weights, Jones knot polynomials, Feynman diagram renormalization, the Wigner semi-circle law, quantum groups, and other stuff somewhat beyond my grasp.
    Is our universe noncommutative? We could hardly even ask the question, were it not for von Neumann and the other giants mentioned above.
    Good summary of all the above in the lead article of FieldNotes, Vol.8, Sep 2007.
    I’m still wrestling with a paper I’ve gotten to writing almost 100 pages (too long) on *-algebras for Vasiliev’s “Imaginary Logics.” The issue to Vasiliev was universes where not only physical law might be different, but Logic itself. How does a being in our Universe 1 with Logic 1 talk coherently about a being in Universe 2 with Logic 2, or, as he points out, a being in Universe 2 who claims to be operating in Logic 2, but is really operating in Logic 3? And what if Universe 2 is an imaginary universe, a fictional universe, perhaps that of Oz, or Shakespeare’s Midsummer Night’s Dream, or Charles Dodgson’s Alice in Wonderland? Cervantes may not have been the first to suggest that we may be characters in someone else’s book, nor was Borges the last.
    What is the space of all possible Science Fiction about universes with different Physical Laws?
    As Lawrence Krauss writes, in the reaction at Edge to Davies’ Op Ed piece:
    Einstein once said that what most interested him about the Universe was whether God had any choice in its creation. He was, of course, not referring to a deity here, but rather asking the very important question: Can there only be one set of physical laws that allow for a consistent physical universe, or are there many possibilities?
    This is precisely the question that Paul Davies suggests scientists do not generally ask, and moreover it demonstrates profound differences between the ‘faith’ of scientists, and religious faith. It is true that there is no purpose in carrying out scientific investigations if we are to believe that the laws of nature are capricious and can change from day to day in unpredictable ways.
    “Dieu a choisi celuy qui est… le plus simple en hypotheses et le plus riche en phenomenes”
    [God has chosen that which is the most simple in hypotheses and the most rich in phenomena]
    “Mais quand une regle est fort composée, ce qui luy est conforme, passe pour irrégulier”
    [But when a rule is extremely complex, that which conforms to it passes for random]
    — Leibniz, Discours de métaphysique, VI, 1686
    At ICCS-2007, in the plenary talk by Gregory Chaitin, who quoted Leibniz and claimed that Leibniz had been writing very modern Complex Systems concepts, I commented that my late coauthor, Nobel laureate Richard Feynman’s (never published) speculation that there might be an infinite number of physical laws, is in uneasy tension with Leibniz’s min-max Theomathematics.

  15. “…Isn’t it likely that somewhere in that universe there would be sentient creatures…?”
    Why does the answer to this question matter? Whence any discomfort with the notion that we might be unlikely? As soon as you say “we do not need to hypothesize a god because we are not unlikely enough”, you have already bought into the god-talk frame and trapped yourself into trying to prove that we are below some threshold of unlikeliness; but that threshold cannot be objectively set and I don’t think the proposition could be proven in any case.
    How would you characterize the level of unlikeliness that you would be uncomfortable with?
