Book: On Intelligence
Summary: Jeff Hawkins is one of the creators of the Palm Pilot. He also harbored a lifelong desire to understand intelligence, trying and failing to get into MIT to study AI before finding entrepreneurial success with the Palm Pilot and Handspring. In 2004 he published “On Intelligence” with science writer Sandra Blakeslee; it is his attempt to come up with a theory of intelligence (note that I refer to the theory as Hawkins’, even though the book is by Hawkins and Blakeslee). There is much here that is likely controversial. But there is also a lot that, reading this in 2016 when multi-layered neural nets are all the rage, now sounds very prescient.
One of Hawkins’ central ideas is the memory-prediction framework. Essentially, the brain stores memories and then uses those memories to make predictions, which propagate out to actions and in turn shape memory. He talks a lot about “invariant representations” and hierarchies, with a special focus on how cortical columns in the neocortex carry all this out.
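To make the loop concrete, here is a toy sketch (my own illustration, not Hawkins’ actual model, and with none of the hierarchy or invariant representations): memory stores what followed what, predictions are read out of memory, and each new observation feeds back into memory, sharpening future predictions.

```python
from collections import defaultdict, Counter

class MemoryPredictor:
    """Minimal memory-prediction loop: remember transitions, predict from them."""

    def __init__(self):
        # memory maps a context (previous symbol) to counts of what followed it
        self.memory = defaultdict(Counter)

    def predict(self, context):
        # Predict the most frequently remembered successor, or None if unseen.
        followers = self.memory[context]
        return followers.most_common(1)[0][0] if followers else None

    def observe(self, context, actual):
        # Every observation, right or wrong, reinforces memory.
        self.memory[context][actual] += 1

# Feed a repeating pattern; after one pass the predictor anticipates it.
m = MemoryPredictor()
pattern = list("abcabcabc")
hits = 0
for prev, nxt in zip(pattern, pattern[1:]):
    if m.predict(prev) == nxt:
        hits += 1
    m.observe(prev, nxt)
print(hits)  # → 5: misses the first cycle, then predicts every transition
```

The first three transitions are unpredictable (nothing in memory yet); the remaining five are predicted perfectly, which is the flavor of the framework in miniature.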
Rating: Strongly recommend. While there are things I found myself disagreeing with (an argument Hawkins makes about parallel computers seemed way off base to me), or wishing for more details on, the book should make your own neurons fire in interest at the wide range of topics the authors attempt to bring together in support of Hawkins’ theory. There is a tone of the outsider seeing clearly what the narrow academics could not, but this personally didn’t grate on me much (contrast with “A New Kind of Science”, for example).
Speculative: If you must know, I have my own views on predicting the future. The essential idea is that local entities cannot predict their own future because of the local nature of the laws of physics. But in that discussion I fail to point out that this holds only to the extent that information from outside our light cone, combined with uniform priors over that information, destroys predictability. Regularity in the laws and nontrivial priors muck with this and make prediction better than chance. In a sense this is deeply tied to Hawkins’ ideas and is a core component of modern machine learning. While there is no free lunch, lunch does seem to come in particular packages, and opening the bag yields nearly the same results each time (sometimes turkey, sometimes egg salad). The brain can certainly use this both in perceiving and in building the models it uses to predict the future.
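The no-free-lunch point can be illustrated in a few lines (my framing, not from the book): a guesser that simply remembers symbol frequencies beats the chance baseline on a regular, biased stream, but on a uniformly random stream it can do no better than chance.

```python
import random
from collections import Counter

def accuracy(stream):
    # Online predictor: always guess the most frequent symbol seen so far.
    counts, hits = Counter(), 0
    for sym in stream:
        if counts and counts.most_common(1)[0][0] == sym:
            hits += 1
        counts[sym] += 1
    return hits / len(stream)

rng = random.Random(0)
biased  = rng.choices("ab", weights=[9, 1], k=10_000)  # a regular, lawful world
uniform = rng.choices("ab", weights=[1, 1], k=10_000)  # a lawless world

print(accuracy(biased))   # well above the 0.5 chance baseline
print(accuracy(uniform))  # stuck near 0.5: no free lunch here
```

Averaged over all possible worlds the two predictors tie, but in a world with regularity (a nontrivial prior), the memory-based guesser wins, which is all a brain needs.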
So let’s push this even further. If prediction is essential, and the local laws of physics limit, but do not eliminate, it, then might one be able to derive fundamental constraints on the memory-prediction framework from basic physics? This is along the lines of David Deutsch’s ideas about deriving the Church-Turing thesis from physics. Indeed, maybe the memory-prediction framework provides a sort of Church-Turing thesis for intelligence: all intelligence arises from prediction feeding back into memory and action, no matter the substrate. Hmm, not quite hashed out, but interesting to contemplate.