{"id":2109,"date":"2008-10-22T12:00:54","date_gmt":"2008-10-22T19:00:54","guid":{"rendered":"http:\/\/dabacon.org\/pontiff\/?p=2109"},"modified":"2008-10-22T12:00:54","modified_gmt":"2008-10-22T19:00:54","slug":"majority-gate-reality","status":"publish","type":"post","link":"https:\/\/dabacon.org\/pontiff\/2008\/10\/22\/majority-gate-reality\/","title":{"rendered":"Majority Gate Reality"},"content":{"rendered":"<p>The universe doesn&#8217;t always operate the way we want it to.  No, I&#8217;m not talking about the stock market (unless you&#8217;ve been short lately), I&#8217;m talking about the role of error in logical deterministic systems!  When you zoom down far enough into any computing device you&#8217;ll see that its constituent components don&#8217;t behave in a completely digital manner and all sorts of crazy random crap (technical term) is going on.  Yet, on a larger scale, digital logic can and does emerge.<br \/>\nToday, heading to work in the early dawn, I was pondering what all of this meant for our notion of reality.<br \/>\n<!--more--><br \/>\nSome of the seminal work on how ordered digital logic can emerge from faulty components was done by John von Neumann in the 1950s and published in 1956.  In &#8220;Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components&#8221; (available <a href=\"http:\/\/www.dna.caltech.edu\/courses\/cs191\/\">here<\/a>) von Neumann asked whether it is possible to compute reliably when the constituent gates doing the computing can fail.  In particular, von Neumann considered a model in which each individual gate in a digital circuit fails with probability p, independently of every other gate.  
(For the history buffs: you might check out the acknowledgements in this paper, where a future Nobel prize winner in physics makes the list.)<br \/>\nWhat most people know about von Neumann&#8217;s early work is that he was able to show how one could turn a network of failing gates into a highly reliable one, using some basic ideas in error correction along with some careful choices of circuits (assuming the probability of an individual gate failing was below some threshold, which von Neumann estimated at about 1 percent).  Also, technically, von Neumann&#8217;s construction was a bit incomplete, relying on a random choice of permutations; an explicit construction was later provided by Pippenger.<br \/>\nWhat most people don&#8217;t know about von Neumann&#8217;s work is his first observation about computing with faulty components.  Suppose you have a circuit with faulty components.  Further suppose that at the end of the day you would like your computation to output a single bit.  If we require that this single bit be carried on one of the wires of our digital circuit, then we immediately have a problem.  No matter what we do, the probability that the last circuit element, which outputs our output bit, fails is p.  Thus the probability that we will get the wrong answer is at least p.  In other words, if we depend on individual bits in noisy networks, we are doomed to fail!<br \/>\nOf course the computer scientist in you says: well, that&#8217;s not a big deal, why not just run the computation many times and then take the majority vote of each of the bits in these runs?  Fine, except that this then requires that either a circuit element must take the majority (and we run into the same problem) or that <em>you<\/em> must take the majority of the different runs.  
But you who read this: what makes you think that you are able to reliably perform this majority operation?<br \/>\nThe way around the single bit problem, of course, is to build a reliable wire out of N unreliable wires.  Then you can set thresholds on the numbers of zeros and ones which you &#8220;interpret&#8221; as a zero and a one.  In other words, you take N bits and if, say, 70 percent of them are 0s you interpret that as a 0, and if, say, 70 percent of them are 1s you interpret that as a 1.  Anything else you interpret as &#8220;ambiguous.&#8221;  Thus digital 0 and digital 1 emerge out of a sort of democracy of bits voting.  In practice, when you think about how classical computers achieve robust computing, you can explicitly see this sort of effect going on.<br \/>\nBut what does this say, exactly, on a deeper level?  On the one hand it seems to me that this is a fundamental challenge to those who believe that digital computing is somehow the natural language of the universe.  If digital computation must emerge from the noisy muck, what right do we have to believe that it is the natural language to express how the universe works?<br \/>\nIn a similar vein, am I the only one who finds von Neumann&#8217;s arguments strangely disturbing when I think about my own thought process?  We like our thoughts to be fundamental bits of how we construct reality.  But these bits must themselves emerge from the muck of noisy constituents.  The reality in our head emerges unscathed only when we interpret it with a mythical majority vote gate which never fails.<br \/>\nWhich is all to say that thinking, while you&#8217;re heading to work, about majority vote gates and what you&#8217;re actually thinking is enough to send this poor sap&#8217;s head into a state of confusion.  
Or maybe that&#8217;s just a sign that the bits in my brain are voting &#8220;ambiguous&#8221; at this very moment?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The universe doesn&#8217;t always operate the way we want it to. No, I&#8217;m not talking about the stock market (unless you&#8217;ve been short lately), I&#8217;m talking about the role of error in logical deterministic systems! When you zoom down far enough into any computing device you&#8217;ll see that its constituent components don&#8217;t behave in a &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/dabacon.org\/pontiff\/2008\/10\/22\/majority-gate-reality\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Majority Gate Reality&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[20,50,53],"tags":[],"class_list":["post-2109","post","type-post","status-publish","format-standard","hentry","category-computer-science","category-off-the-deep-end","category-physics"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/2109","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dabacon.org\/pontiff\/
wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/comments?post=2109"}],"version-history":[{"count":0,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/2109\/revisions"}],"wp:attachment":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/media?parent=2109"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/categories?post=2109"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/tags?post=2109"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}