{"id":5912,"date":"2012-02-24T10:30:01","date_gmt":"2012-02-24T17:30:01","guid":{"rendered":"http:\/\/dabacon.org\/pontiff\/?p=5912"},"modified":"2012-02-24T10:30:01","modified_gmt":"2012-02-24T17:30:01","slug":"what-increases-when-a-self-organizing-system-organizes-itself-logical-depth-to-the-rescue","status":"publish","type":"post","link":"https:\/\/dabacon.org\/pontiff\/2012\/02\/24\/what-increases-when-a-self-organizing-system-organizes-itself-logical-depth-to-the-rescue\/","title":{"rendered":"What increases when a self-organizing system organizes itself? Logical depth to the rescue."},"content":{"rendered":"<p><strong><\/strong><em> (An earlier version of this post appeared in the latest <a href=\"http:\/\/www.aps.org\/units\/gqi\/newsletters\/upload\/vol6num3.pdf\">newsletter<\/a> of the American Physical Society\u2019s special interest group on Quantum Information.)<\/em><br \/>\nOne of the most grandly pessimistic ideas from the 19<sup>th<\/sup> century is that of\u00a0 &#8220;heat death&#8221; according to which a closed system, or one coupled to a single heat bath at thermal\u00a0 equilibrium,\u00a0 eventually inevitably settles into an uninteresting state devoid of life or macroscopic motion.\u00a0 Conversely, in an idea dating back to Darwin and <a href=\"https:\/\/en.wikipedia.org\/wiki\/Herbert_Spencer\">Spencer<\/a>, nonequilibrium boundary conditions are thought to have caused or allowed the biosphere to self-organize over geological time.\u00a0 Such godless creation, the bright flip side of the godless hell of heat death, nowadays seems to worry creationists more than Darwin\u2019s initially more inflammatory idea that people are descended from apes. 
They have fought back, using superficially scientific arguments, in their masterful <a href=\"https:\/\/www.youtube.com\/watch?v=FZFG5PKw504\">peanut butter video<\/a>.<br \/>\n<img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" title=\"3scenes\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2012\/02\/3scenes.jpg?resize=525%2C165&#038;ssl=1\" alt=\"Self-organization versus heat death\" width=\"525\" height=\"165\" \/><br \/>\nMuch simpler kinds of complexity generation occur in toy models with well-defined dynamics, such as this one-dimensional reversible cellular automaton.\u00a0 Started from a simple initial condition at the left edge (periodic, but with a symmetry-breaking defect) it generates a deterministic wake-like history of growing size and complexity.\u00a0 (The automaton obeys a second order transition rule, with a site\u2019s future differing from its past iff exactly two of its first and second neighbors in the current time slice, not counting the site itself, are black and the other two are white.)<br \/>\n<figure id=\"attachment_6071\" aria-describedby=\"caption-attachment-6071\" style=\"width: 600px\" class=\"wp-caption alignnone\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\" wp-image-6071\" title=\"wake\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2012\/02\/Q2r1aq.jpg?resize=525%2C282&#038;ssl=1\" alt=\"\" width=\"525\" height=\"282\" \/><figcaption id=\"caption-attachment-6071\" class=\"wp-caption-text\">Fig 2.<\/figcaption><\/figure><br \/>\nTime \u2192<br \/>\nBut just what is it that increases when a self-organizing system organizes itself?<br \/>\nSuch organized complexity is not a thermodynamic potential like entropy or free energy.\u00a0 To see this, consider transitions between a flask of sterile nutrient solution and the bacterial culture it would become if inoculated by a single seed bacterium.\u00a0 Without the seed bacterium, the transition from 
sterile nutrient to bacterial culture is allowed by the Second Law, but prohibited by a putative \u201cslow growth law\u201d, which prohibits organized complexity from increasing quickly, except with low probability.<br \/>\n<figure id=\"attachment_6072\" aria-describedby=\"caption-attachment-6072\" style=\"width: 914px\" class=\"wp-caption alignnone\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-6072\" title=\"FLASKS\" src=\"https:\/\/i0.wp.com\/dabacon.org\/pontiff\/wp-content\/uploads\/2012\/02\/FLASKS.jpg?resize=525%2C275&#038;ssl=1\" alt=\"\" width=\"525\" height=\"275\" \/><figcaption id=\"caption-attachment-6072\" class=\"wp-caption-text\">Fig. 3<\/figcaption><\/figure><br \/>\nThe same example shows that organized complexity is not an extensive quantity like free energy.\u00a0 The free energy of a flask of sterile nutrient would be little altered by adding a single seed bacterium, but its organized complexity must have been greatly increased by this small change.\u00a0 The subsequent growth of many bacteria is accompanied by a macroscopic drop in free energy, but little change in organized complexity.<br \/>\nThe relation between universal computer programs and their outputs has long been viewed as a formal analog of the relation between theory and phenomenology in science, with the various programs generating a particular output <em>x<\/em> being analogous to alternative explanations of the phenomenon <em>x<\/em>.\u00a0 This analogy draws its authority from the ability of universal computers to execute all formal deductive processes and their presumed ability to simulate all processes of physical causation.<br \/>\nIn algorithmic information theory, the <em>Kolmogorov complexity<\/em> of a bit string <em>x<\/em> is defined as the size in bits of its <em>minimal program<\/em> <em>x<\/em>*, the smallest (and lexicographically first, in case of ties) program causing a standard universal computer <em>U<\/em> to 
produce exactly <em>x<\/em> as output and then halt.<br \/>\n<em>\u00a0x*<\/em> = min{<em>p<\/em>: <em>U<\/em>(<em>p<\/em>)=<em>x<\/em>}<br \/>\nBecause of the ability of universal machines to simulate one another, a string\u2019s Kolmogorov complexity is machine-independent up to a machine-dependent additive constant, and similarly is equal to within an additive constant to the string\u2019s <em>algorithmic entropy<\/em> <em>H<span style=\"text-decoration: underline\"><sub>U<\/sub><\/span><\/em>(<em>x<\/em>), the negative log of the probability that <em>U<\/em> would output exactly <em>x<\/em> and halt if its program were supplied by coin tossing.\u00a0 Bit strings whose minimal programs are no smaller than the string itself are called <em>incompressible,<\/em> or <em>algorithmically random, <\/em>because they lack internal structure or correlations that would allow them to be specified more concisely than by a verbatim listing. Minimal programs themselves are incompressible to within <em>O<\/em>(1), since otherwise their minimality would be undercut by a still shorter program.\u00a0 By contrast to minimal programs, any program <em>p<\/em> that is significantly compressible is intrinsically <em>implausible<\/em> as an explanation for its output, because it contains internal redundancy that could be removed by deriving it from the more economical hypothesis <em>p<\/em>*.\u00a0 In terms of Occam\u2019s razor, a program that is compressible by <em>s <\/em>bits is deprecated as an explanation of its output because it suffers from <em>s <\/em>bits worth of ad-hoc assumptions.<br \/>\nThough closely related<a title=\"\" href=\"#_ftn1\">[1]<\/a> to statistical entropy, Kolmogorov complexity itself is not a good measure of organized complexity because it assigns high complexity to typical random strings generated by coin tossing, which intuitively are trivial and unorganized. 
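The contrast between structured and algorithmically random strings can be made concrete with an ordinary compressor standing in for the (uncomputable) minimal program. The sketch below is only a heuristic, and the assumption should be stated plainly: zlib's compressed length is merely a crude, computable upper bound on Kolmogorov complexity, not the true minimal-program size.

```python
import os
import zlib

def approx_complexity(data: bytes) -> int:
    """Length in bytes of a zlib-compressed encoding: a crude, computable
    upper-bound stand-in for the (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

structured = b"01" * 5000       # highly patterned 10,000-byte string
random_ish = os.urandom(10000)  # typical "coin-toss" string

# The patterned string compresses to a tiny fraction of its length,
# while the random string is essentially incompressible.
assert approx_complexity(structured) < 100
assert approx_complexity(random_ish) > 9000
```

On this proxy, the coin-toss string scores near-maximal "complexity", which is exactly the infelicity discussed above: high compressed length certifies randomness, not organization.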
\u00a0Accordingly, many authors have considered modified versions of Kolmogorov complexity\u2014also measured in entropic units like bits\u2014hoping thereby to quantify the nontrivial part of a string\u2019s information content, as opposed to its mere randomness. \u00a0A recent example is Scott Aaronson\u2019s notion of <a href=\"http:\/\/www.scottaaronson.com\/blog\/?p=762--\">complextropy<\/a>, defined roughly as the number of bits in the smallest program for a universal computer to efficiently generate a probability distribution relative to which <em>x <\/em>\u00a0cannot efficiently be recognized as atypical.<br \/>\nHowever, I believe that entropic measures of complexity are generally unsatisfactory for formalizing the kind of complexity found in intuitively complex objects in nature, or gradually produced from simple initial conditions by simple dynamical processes, and that a better approach is to characterize an object\u2019s complexity by the amount of number-crunching (i.e. 
computation time, measured in machine cycles, or more generally other dynamic computational resources such as time, memory, and parallelism) required to produce the object from a near-minimal-sized description.<br \/>\nThis approach, which I have called \u00a0<a href=\"http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?doi=10.1.1.70.4331&amp;rep=rep1&amp;type=pdf\">logical depth<\/a>, is motivated by a common feature of intuitively organized objects found in nature: the internal evidence they contain of a nontrivial causal history.\u00a0 If one accepts that an object\u2019s minimal program represents its most plausible explanation, then the minimal program\u2019s run time represents the number of steps in its most plausible history.\u00a0 To make depth stable with respect to small variations in <em>x<\/em> or <em>U<\/em>, it is necessary also to consider programs other than the minimal one, appropriately weighted according to their compressibility, resulting in the following two-parameter definition.<\/p>\n<ul>\n<li>An object\u00a0 <em>x\u00a0 <\/em>is called\u00a0 <em>d<\/em>-deep with \u00a0<em>s<\/em> \u00a0bits significance iff every program for <em>U<\/em> to compute <em>x<\/em> in time &lt;<em>d<\/em> is compressible by at least <em>s<\/em> bits. This formalizes the idea that every hypothesis for \u00a0<em>x <\/em>\u00a0to have originated more quickly than in time\u00a0 <em>d<\/em>\u00a0 contains\u00a0 <em>s<\/em> bits worth of ad-hoc assumptions.<\/li>\n<\/ul>\n<p>Dynamic and static resources, in the form of the parameters\u00a0 <em>d \u00a0<\/em>and <em>\u00a0s,<\/em>\u00a0 play complementary roles in this definition:\u00a0 <em>d<\/em>\u00a0 as the quantifier and\u00a0 <em>s<\/em>\u00a0 as the certifier of the object\u2019s nontriviality. 
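The definition above can be exercised by brute force over a finite toy program space. The "machine" below is invented purely for illustration (nothing like a real universal computer): a program is a bit string whose first three bits encode an iteration count <em>t<\/em>, charged as the running time, applied to the remaining seed bits via the Thue-Morse substitution 0 \u2192 01, 1 \u2192 10. Within this finite space, "program length within <em>s<\/em> bits of minimal" serves as a computable proxy for "incompressible by fewer than <em>s<\/em> bits".

```python
from itertools import product

def run(prog: str, step_budget: int):
    """Toy machine (invented for illustration): the first 3 bits of prog give
    an iteration count t (charged as running time); the remaining bits are a
    seed rewritten t times by the substitution 0 -> 01, 1 -> 10."""
    if len(prog) < 3:
        return None
    t = int(prog[:3], 2)
    if t > step_budget:
        return None
    s = prog[3:]
    for _ in range(t):
        s = "".join("01" if c == "0" else "10" for c in s)
    return s, t

def depth_at_significance(x: str, max_len: int, s_bits: int, d_max: int):
    """Least running time among programs for x whose length is within s_bits
    of minimal -- i.e. the largest d such that x is d-deep at significance
    s_bits, under the finite-space proxy for incompressibility."""
    hits = []  # (program length, running time) of every program producing x
    for length in range(3, max_len + 1):
        for bits in product("01", repeat=length):
            result = run("".join(bits), d_max)
            if result is not None and result[0] == x:
                hits.append((length, result[1]))
    if not hits:
        return None
    min_len = min(length for length, _ in hits)
    return min(t for length, t in hits if length <= min_len + s_bits)

# The 16-bit Thue-Morse prefix: the minimal program is 4 bits ("100" + seed
# "0", 4 rewriting steps), but allowing 3 extra bits admits a faster 7-bit
# program ("010" + seed "0110", 2 steps), so the certified depth falls as
# the significance requirement is relaxed.
x = "0110100110010110"
print(depth_at_significance(x, max_len=8, s_bits=0, d_max=7))  # -> 4
print(depth_at_significance(x, max_len=8, s_bits=3, d_max=7))  # -> 2
```

The example shows <em>d<\/em> and <em>s<\/em> trading off exactly as in the definition: demanding more significance (smaller <em>s<\/em>) certifies a longer plausible history. For a real universal machine no such exhaustive search terminates, which is the semicomputability discussed below.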
\u00a0Invoking the two parameters in this way not only stabilizes depth with respect to small variations of\u00a0 <em>x<\/em> and <em>U<\/em>, but also makes it possible to prove that depth obeys a slow growth law, without which any mathematical definition of organized complexity would seem problematic.<\/p>\n<ul>\n<li>A fast deterministic process cannot convert shallow objects to deep ones, and a fast stochastic process can only do so with low probability.\u00a0 (For details see <a href=\"http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?doi=10.1.1.70.4331&amp;rep=rep1&amp;type=pdf\">Bennett88<\/a>.)<\/li>\n<\/ul>\n<p>&nbsp;<br \/>\nLogical depth addresses many infelicities and problems associated with entropic measures of complexity.<\/p>\n<ul>\n<li>It does not impose an arbitrary rate of exchange between the independent variables of strength of evidence and degree of nontriviality of what the evidence points to, nor an arbitrary maximum complexity that an object can have, relative to its size.\u00a0 Just as a microscopic fossil can validate an arbitrarily long evolutionary process, so a small fragment of a large system, one that has evolved over a long time to a deep state, can contain evidence of the entire depth of the large system, which may be more than exponential in the size of the fragment.<\/li>\n<li>It helps explain the increase of complexity at early times and its decrease at late times by providing different mechanisms for these processes.\u00a0 In figure 2, for example, depth increases steadily at first because it reflects the duration of the system\u2019s actual history so far.\u00a0 At late times, when the cellular automaton has run for a generic time comparable to its Poincar\u00e9 recurrence time, the state becomes shallow again, not because the actual history was uneventful, but because evidence of that history has become degraded to the point of statistical insignificance, allowing the final state to be generated quickly from a 
near-incompressible program that short-circuits the system\u2019s actual history.<\/li>\n<li>It helps explain why some systems, despite being far from thermal equilibrium, never self-organize.\u00a0 For example in figure 1, the gaseous sun, unlike the solid earth, appears to lack means of remembering many details about its distant past.\u00a0 Thus while it contains evidence of its age (e.g. in its hydrogen\/helium ratio) almost all evidence of particular details of its past, e.g. the locations of sunspots, is probably obliterated fairly quickly by the sun\u2019s hot, turbulent dynamics.\u00a0 On the other hand, systems with less disruptive dynamics, like our earth, could continue increasing in depth for as long as their nonequilibrium boundary conditions persisted, up to an exponential maximum imposed by Poincar\u00e9 recurrence.<\/li>\n<li>Finally, depth is robust with respect to transformations that greatly alter an object\u2019s size and Kolmogorov complexity, and many other entropic quantities, provided the transformation leaves intact significant evidence of a nontrivial history. Even a small sample of the biosphere, such as a single DNA molecule, contains such evidence.\u00a0 Mathematically speaking, the depth of a string <em>x<\/em> is not much altered by replicating it (like the bacteria in the flask), padding it with zeros or random digits, or passing it through a noisy channel (although the latter treatment decreases the significance parameter <em>s<\/em>). 
\u00a0If the whole history of the earth were <em>derandomized, <\/em>by substituting deterministic pseudorandom choices for all its stochastic accidents, the complex objects in this substitute world would have very little Kolmogorov complexity, yet their depth would be about the same as if they had resulted from a stochastic evolution.<\/li>\n<\/ul>\n<p>The remaining infelicities of logical depth as a complexity measure are those afflicting computational complexity and algorithmic entropy theories generally.<\/p>\n<ul>\n<li>Lack of tight lower bounds: because of the open P=PSPACE question, one cannot exhibit a system that provably generates depth more than polynomial in the space used.<\/li>\n<li>Semicomputability:\u00a0 deep objects can be proved deep (with exponential effort) but shallow ones can\u2019t be proved shallow.\u00a0 The semicomputability of depth, like that of Kolmogorov complexity, is an unavoidable consequence of the unsolvability of the halting problem.<\/li>\n<\/ul>\n<p>The following observations can be made, partially mitigating these infelicities.<\/p>\n<ul>\n<li>Using the theory of cryptographically strong pseudorandom functions one can argue (if such functions exist) that deep objects can be produced efficiently, in time polynomial and space polylogarithmic in their depth, and indeed that they are produced efficiently by some physical processes.<\/li>\n<li>Semicomputability does not render a complexity measure entirely useless. Even though a particular string cannot be proved shallow, and requires an exponential amount of effort to prove it deep, the depth-producing properties of stochastic processes can be established, assuming the existence of cryptographically strong pseudorandom functions. 
This parallels the fact that while no particular string can be proved to be algorithmically random (incompressible), it can be proved that the statistically random process of coin tossing produces algorithmically random strings with high probability.<\/li>\n<\/ul>\n<p>&nbsp;<br \/>\nGranting that a logically <em>deep <\/em>object is one plausibly requiring a lot of computation to produce, one can consider various related notions:<\/p>\n<ul>\n<li>An object \u00a0<em>y \u00a0<\/em>is <em>deep relative<\/em> to \u00a0<em>x<\/em> \u00a0if all near-minimal sized programs for computing \u00a0<em>y \u00a0<\/em>from \u00a0<em>x<\/em> \u00a0are slow-running. \u00a0Two shallow objects may be deep relative to one another, for example a random string and its XOR with a deep string.<\/li>\n<li>An object can be called <em>cryptic <\/em>if it is computationally difficult to obtain a near-minimal sized program for the object from the object itself, in other words if any near-minimal sized program for <em>x<\/em> is deep relative to <em>x<\/em>.\u00a0 One-way functions, if they exist, can be used to define cryptic objects; for example, in a computationally secure but information theoretically insecure cryptosystem, plaintexts should be cryptic relative to their ciphertexts.<\/li>\n<li>An object can be called <em>ambitious<\/em> if, when presented to a universal computer as input, it causes the computer to embark on a long but terminating computation. Such objects, though they determine a long computation, do not contain evidence of it actually having been done. \u00a0Indeed they may be shallow and even algorithmically random.<\/li>\n<li>An object can be called <em>wise <\/em>if it is deep and a large and interesting family of other deep objects are shallow relative to it. Efficient oracles for hard problems, such as the characteristic function of an NP-complete set, or the characteristic set <em>K<\/em> of the halting problem, are examples of wise objects. 
\u00a0Interestingly, Chaitin\u2019s omega is an exponentially more compact oracle for the halting problem than <em>K<\/em> is, but it is so inefficient to use that it is shallow and indeed algorithmically random.<\/li>\n<\/ul>\n<p>Further details about these notions can be found in <a href=\"http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?doi=10.1.1.70.4331&amp;rep=rep1&amp;type=pdf\">Bennett88<\/a>. \u00a0<a href=\"https:\/\/rjlipton.wordpress.com\/2012\/02\/12\/how-deep-is-your-coloring\/\">K.W. Regan<\/a> in Dick Lipton\u2019s blog discusses the logical depth of Bill Gasarch\u2019s recently discovered solutions to the 17&#215;17 and 18&#215;18 four-coloring problems.<br \/>\nI close with some comments on the relation between organized complexity and thermal disequilibrium, which since the 19<sup>th<\/sup> century has been viewed as an important, perhaps essential, prerequisite for self-organization.\u00a0\u00a0 Broadly speaking, locally interacting systems at thermal equilibrium obey the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Gibbs_phase_rule\">Gibbs phase rule<\/a>, and its generalization in which the set of independent parameters is enlarged to include not only intensive variables like temperature, pressure and magnetic field, but also all parameters of the system\u2019s Hamiltonian, such as local coupling constants.\u00a0 \u00a0A consequence of the Gibbs phase rule is that for generic values of the independent parameters, i.e. 
at a generic point in the system\u2019s phase diagram, only one phase is thermodynamically stable.\u00a0 This means that if a system\u2019s independent parameters are set to generic values, and the system is allowed to come to equilibrium, its structure will be that of this unique stable Gibbs phase, with spatially uniform properties and typically short-range correlations.\u00a0\u00a0 Thus for generic parameter values, when a system is allowed to relax to thermal equilibrium, it entirely forgets its initial condition and history and exists in a state whose structure can be adequately approximated by stochastically sampling the distribution of microstates characteristic of that stable Gibbs phase. \u00a0Dissipative systems\u2014those whose dynamics is not microscopically reversible or whose boundary conditions prevent them from ever attaining thermal equilibrium\u2014are exempt from the Gibbs phase rule for reasons discussed in <a href=\"http:\/\/ipv4.google.com\/sorry\/IndexRedirect?continue=http:\/\/scholar.google.com\/scholar_url%3Fhl%3Den%26q%3Dhttp:\/\/www.de.ufpe.br\/~toom\/others-articles\/engmat\/BEN-GRI.pdf%26sa%3DX%26scisig%3DAAGBfm1tR0BsQHIMrecnoVGn9Wh-u_TYZA%26oi%3Dscholarr&amp;q=CGMSBGyo8pIYuOflrAUiGQDxp4NLipqj7jUcl7x6mYxxCbyvDb4Mfoo\">BG85<\/a>, and so are capable, other conditions being favorable, of producing structures of unbounded depth and complexity in the long time limit. For further discussion and a comparison of logical depth with other proposed measures of organized complexity, see <a href=\"http:\/\/web.archive.org\/web\/20061015234424\/http:\/\/www.research.ibm.com\/people\/b\/bennetc\/bennettc19903b272b10.pdf\">B90<\/a>.<br \/>\n&nbsp;<\/p>\n<div><\/p>\n<hr align=\"left\" size=\"1\" width=\"33%\" \/>\n<div>\n<a title=\"\" href=\"#_ftnref1\">[1]<\/a> An elementary result of algorithmic information theory is that for any probability ensemble of bit strings (representing e.g. 
physical microstates), the ensemble\u2019s Shannon entropy differs from the expectation of its members\u2019 algorithmic entropy by at most the number of bits required to describe a good approximation to the ensemble.\n<\/div>\n<\/div>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>(An earlier version of this post appeared in the latest newsletter of the American Physical Society\u2019s special interest group on Quantum Information.) One of the most grandly pessimistic ideas from the 19th century is that of\u00a0 &#8220;heat death&#8221; according to which a closed system, or one coupled to a single heat bath at thermal\u00a0 equilibrium,\u00a0 &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/dabacon.org\/pontiff\/2012\/02\/24\/what-increases-when-a-self-organizing-system-organizes-itself-logical-depth-to-the-rescue\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;What increases when a self-organizing system organizes itself? 
Logical depth to the rescue.&#8221;<\/span><\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[20,32,41,53,70],"tags":[],"class_list":["post-5912","post","type-post","status-publish","format-standard","hentry","category-computer-science","category-general","category-mathematics","category-physics","category-science"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/5912","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/comments?post=5912"}],"version-history":[{"count":0,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/5912\/revisions"}],"wp:attachment":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/media?parent=5912"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/categories?post=5912"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/tags?
post=5912"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}