Thursday, March 12, 2009

The game is changing

When I'm beating my "not-so-disruptive technology" drum, I'm not trying to say that technological change has no effect. Rather, technological change is more gradual than we might sometimes think. Over time, it can have significant effects. The rule of thumb I've heard is that predictions overestimate the short term and underestimate the long term.

For example, over my lifetime Moore's law has tracked orders of magnitude worth of steady improvement in hardware. This progress has been accompanied by a steady stream of discoveries and algorithmic improvements on the software side (and a counterbalancing accumulation of layers between the code and the hardware, but that's a separate story).
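To put "orders of magnitude" in concrete terms, here's a quick back-of-the-envelope sketch. The doubling period and time span are illustrative assumptions (a two-year doubling over forty years), not figures from the post:

```python
import math

# Back-of-the-envelope Moore's law arithmetic.
# Assumed (hypothetical) parameters: doubling every 2 years, over 40 years.
years = 40
doubling_period = 2

doublings = years // doubling_period   # 20 doublings
growth = 2 ** doublings                # total improvement factor
magnitudes = math.log10(growth)        # same thing in orders of magnitude

print(growth)                # 1048576 -- roughly a million-fold
print(round(magnitudes, 1))  # 6.0 -- about six orders of magnitude
```

Even modest-sounding steady doubling compounds into a million-fold improvement over a working lifetime, which is the sense in which gradual change adds up.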

In the early days of AI, there was much breathless talk (not necessarily by those actually doing the research) about the "electronic brain" being able to match and surpass the human brain. Only later did it become clear that this vastly underestimated the sheer computational power of a human brain -- or a gerbil brain, for that matter.

A classic AI program, say SHRDLU, ran on a PDP-6 and processed small chunks of text to maintain a set of assertions about a toy "block world". It was definitely a neat hack, and it looked pretty impressive since, as everyone knew, language processing is one of the highest levels of thought and therefore one of the most difficult [Re-reading, that's not quite right. Language processing really is hard in its full glory. However SHRDLU, like everything else at the time, did fairly rudimentary language processing. The "gee-whiz" part was that it could keep track of spatial relations between blocks in its block world. My point was that this is actually much, much simpler than, say, walking. So for "language processing" read "reasoning about spatial relations".].
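To see why reasoning about spatial relations is "much, much simpler than walking," here is a toy illustration in miniature (this is not SHRDLU, just a hypothetical sketch of the kind of assertion-tracking involved): a block world where the only relation is which block sits on which.

```python
# Toy block world: maintain a set of "X is on Y" assertions and
# answer simple queries about them. A few dozen lines suffice.
on = {}  # maps each block to whatever it currently sits on

def put(block, support):
    """Assert that `block` is now resting on `support`."""
    on[block] = support

def below(block):
    """Everything underneath `block`, from nearest support down."""
    chain = []
    while block in on:
        block = on[block]
        chain.append(block)
    return chain

put("A", "table")
put("B", "A")
put("C", "B")
print(below("C"))  # ['B', 'A', 'table']
```

The entire "world model" is one dictionary and a loop; the gee-whiz effect came from wrapping this kind of bookkeeping in natural-language dialogue, not from any depth in the spatial reasoning itself.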

In fact, "high-level" abstract thought is going to be about the easiest type of thought for a computer to mimic -- the computer itself is the product of exactly that sort of thought, so a certain structural suitability is to be expected. This is probably clearer in hindsight than at the time.

An oversimplified view of what happened next is that we began to understand and appreciate a few key facts:
  • Biological minds are much more complex than a PDP-6. There are on the order of a hundred billion neurons in the brain, versus (somewhere around) hundreds of thousands of components in a PDP-6, most of which would have been core memory (by which I mean actual magnetic cores).
  • Biological minds are distributed and work in parallel. For example, the eye is not a simple camera. The retina and optic nerve do significant processing before the image even reaches the brain.
  • Biological computation is not based on abstract reasoning. Rather, it's the other way around: abstract reasoning is built on top of biological computation, which has a much more statistical, approximate flavor than traditional symbol-bashing.
and so, again oversimplifying, the AI bubble burst. Everyone now knew that AI was a pipe dream. Best to go back to real work.

"Real work" meant a lot of things, but it included:
  • Figuring out how to handle much larger amounts of data than, say, dozens or hundreds or even thousands of assertions about blocks in a toy world, and how to make use of hardware that (currently) throws around prefixes like giga- and tera-.
  • Figuring out how to build distributed systems that work in parallel.
  • Figuring out how to handle messy, approximate real-world problems.
Hmm ... three bullet points up there ... three bullet points down here ... almost like they were meant to be compared and contrasted ...

The jumping-off point for all this was the observation, in the previous post, that our internal data network appears to have capacity comparable to that of fairly fast off-the-shelf digital networks. This rough parity is a fairly recent development.
