{"id":2041,"date":"2008-08-25T11:00:18","date_gmt":"2008-08-25T18:00:18","guid":{"rendered":"http:\/\/dabacon.org\/pontiff\/?p=2041"},"modified":"2008-08-25T11:00:18","modified_gmt":"2008-08-25T18:00:18","slug":"self-correcting-quantum-computers-part-ii","status":"publish","type":"post","link":"https:\/\/dabacon.org\/pontiff\/2008\/08\/25\/self-correcting-quantum-computers-part-ii\/","title":{"rendered":"Self-Correcting Quantum Computers, Part II"},"content":{"rendered":"<p>Why is classical computing possible at all?  A silly question, but one which never ceases to amaze me.  Part II of my attempt to explain one of my main research interests in quantum computing: &#8220;self-correcting quantum computers.&#8221; Prior parts: Part I<br \/>\n<!--more--><br \/>\nLast time I discussed how quantum computing is a lot like classical probabilistic computing.  Given this, one can think about a question which seems silly at first: how is it possible to compute at all when what you have is a classical probabilistic computer?<\/p>\n<h3> Why Is Classical Computation Possible? <\/h3>\n<p>Classical computers are both digital and deterministic.  But if you go and take a microscope to your classical computer, what you will see isn&#8217;t anything like these two categories.  For example, if you go and look at the electrons in your silicon-based transistor, not all of the electrons are doing the same things. Even in today&#8217;s ultra-pure systems, real devices have defects (they are dirty), and the electrons in them are behaving in all sorts of strange ways.  Of course, the aggregate effect of all of these electrons bouncing around and doing all sorts of crazy things is, when digitized, deterministic.  Thus the transistors in your computer are digital and deterministic in spite of the fact that the systems out of which they are constructed are both analog and probabilistic (or worse, analog and quantum!)  How is this possible?  
How is it possible to take probabilistic (read: noisy) classical analog systems and turn them into deterministic digital computers?  The answer to these questions isn&#8217;t as obvious as you might at first think.  The first part of the answer is how to go from analog to digital.  This is done, in most physical systems, by applying a discretization to some analog set of configurations.  Of course, any such discretization must map some values which are nearby in the analog space to differing digital values.  So there are always precision questions in defining digital configurations.  But, and here is an important point, it is often possible to take an analog system and keep it out of these regimes of difficulty.<\/p>\n<table border=\"1\" align=\"right\">\n<tr>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td width=\"120\">Claude Shannon<\/td>\n<\/tr>\n<\/table>\n<p>Okay, so going from analog to digital seems fairly straightforward.  But what about going from probabilistic to deterministic?  This problem, of how to take systems which change in time according to probabilistic laws and turn them into systems which compute deterministically, has quite a few answers.  If we approach this problem from a simple theoretical perspective, then the answer really comes from the theory of classical error correction, and its further extension to the theory of fault-tolerant classical computation.  The former of these is founded in the groundbreaking work of Claude Shannon.  What Shannon and others showed was that if a classical system is sent through a probabilistic channel which destroys deterministic information, then repeated use of this channel, along with the ideas of classical error correction, can be used to make this probabilistic channel behave nearly deterministically (where nearly means as close to deterministic as you want, with a good scaling in the resources used.)<br \/>\nThe basic idea of classical error correction is simple.  
If you are sending information down a line where this information can be distorted, then you can lessen the probability of this distortion by encoding your message redundantly.  When such an encoding is performed on the information, then after it is sent down the noisy line, a suitable decoding can be performed such that, if the noise is not too strong, the probability of correctly transmitting the information is greater than if it was sent without the encoding.  So, by using classical error correction, one can turn a channel where information evolves probabilistically into one which behaves much less probabilistically.  If enough encoding is used, this makes the transmission of information effectively deterministic.<\/p>\n<table border=\"1\" align=\"right\">\n<tr>\n<td><\/td>\n<\/tr>\n<tr>\n<td width=\"120\">John von Neumann<\/td>\n<\/tr>\n<\/table>\n<p>A refinement of the idea of classical error correction is the idea of fault-tolerant computation.  This is the study of what happens when you are not just transmitting information down a noisy\/probabilistic channel, but are also allowing all of the components of your computer to fail.  Thus, for instance, in fault-tolerant computation, things like your logic gates can act probabilistically.  Further, even things like preparing a classical system in a well-defined state can be taken into account in fault-tolerant computation.  Von Neumann was one of the first to consider how to build an entire computer out of components which were probabilistic.  Von Neumann&#8217;s theory showed that, in principle, if the noise is not too strong, then it is possible to build a fault-tolerant classical computer out of faulty components.<\/p>\n<h3>Next Time&#8230;<\/h3>\n<p>In part III, I&#8217;ll discuss how the story of why classical computation is possible is actually connected to reality.  
In other words, how does it relate to how your hard drive operates?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Why is classical computing possible at all? A silly question, but one which never ceases to amaze me. Part II of my attempt to explain one of my main research interests in quantum computing: &#8220;self-correcting quantum computers.&#8221; Prior parts: Part I<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[65,76],"tags":[],"class_list":["post-2041","post","type-post","status-publish","format-standard","hentry","category-quantum-computing","category-self-meet-center-center-meet-self"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/2041","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/comments?post=2041"}],"version-history":[{"count":0,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/2041\/revisions"}],"wp:attachment":[{"href":"https:\/\/dabaco
n.org\/pontiff\/wp-json\/wp\/v2\/media?parent=2041"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/categories?post=2041"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/tags?post=2041"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}