The basic argument behind the various singularity predictions, of which Vinge's is probably the most famous, is that change accelerates and at some point enters a feedback loop where further change means further acceleration, and so forth. This is a recipe for at least exponential growth. The usual singularity scenario calls for faster-than-exponential growth, as plain old exponential growth does not tend to infinity at any finite time.
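To spell that out with a toy model (my illustration, not anything from Vinge): linear feedback gives exponential growth, which is finite at every finite time, while stronger-than-linear feedback can blow up in finite time.

```latex
% Linear feedback: growth proportional to x; finite at all finite t.
\frac{dx}{dt} = kx
  \quad\Longrightarrow\quad
x(t) = x_0 e^{kt}

% Superlinear (here quadratic) feedback: finite-time blow-up.
\frac{dx}{dt} = kx^2
  \quad\Longrightarrow\quad
x(t) = \frac{x_0}{1 - k x_0 t},
  \quad\text{which diverges as } t \to \frac{1}{k x_0}
```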
Sorry, that was the math degree talking.
For the record, the main flaws I see in this kind of argument are:
- There are always limits to growth. If you put a bacterium in a petri dish, after a while you have a petri dish full of bacteria, and that's it. Yes, at some point along the way the bacterial population was growing more or less exponentially, but at some not-very-much-later point you ran out of dish (the equation after this list is the textbook version of this point).
- The usual analogy to Moore's law -- which Moore himself will tell you is an empirical rule of thumb and not some fundamental law -- can only validly be applied to measurable quantities. You can count the number of components per unit area on a chip. Intelligence has resisted decades of efforts to reduce it to a single linear scale.
- In a similar vein, it's questionable at best to talk of intelligence as a single entity and thus questionable that it should become singular at any particular point.
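Putting that first point in symbols: the standard toy model for growth that runs into limits is the logistic equation, which I'll borrow here as an illustration. It looks exponential while the dish is mostly empty and flattens out as the dish fills up.

```latex
% Logistic growth: r is the growth rate, K the carrying capacity
% (the size of the dish).
\frac{dx}{dt} = r x \left( 1 - \frac{x}{K} \right)

% For x \ll K this is approximately dx/dt = rx, i.e., exponential;
% as x \to K the right-hand side goes to zero and growth stops.
```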
For decades we have had machines that can, autonomously, compute much more quickly than people. Those machines have been getting faster and faster, but no one is about to claim that they will soon be infinitely fast, or that infinite speed would somehow mean the end of humanity. For even longer we've had machines that can lift more than humans, and these machines have grown stronger over time. The elevator in an office building is unarguably superhuman, but to date no elevator has been seen building even stronger elevators that will eventually take over the world.
In all such cases there is the need to
- Be unambiguously clear on what is being measured
- Justify any extrapolations from known data, and in particular state exactly what is feeding back to what
Which brings me to the title. A few years ago The Economist made a few simple observations on the number of blades in a razor as a function of time and concluded that by the year 2015 razors would have an infinite number of blades [As of May 2015 there are only finitely many blades on commercially available razors --D.H.]. Unlike predictions about intelligence, the razor blade prediction at least meets the first of those needs: blades are easy to count. It fails completely on the second, but that's the whole gag.
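For fun, here's a hedged reconstruction of the gag in code. The blade counts and introduction dates below are approximate, and the hyperbolic model is my guess at the sort of curve that produces a finite-time "blade singularity"; The Economist's actual fit may well have differed.

```python
# A back-of-the-envelope razor-blade singularity (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

# Approximate introduction dates of 1-, 2-, 3-, 4- and 5-blade razors.
years = np.array([1903, 1971, 1998, 2003, 2006], dtype=float)
blades = np.array([1, 2, 3, 4, 5], dtype=float)

def hyperbola(t, a, t_singular):
    # Blade count blows up as t approaches t_singular.
    return a / (t_singular - t)

(a, t_singular), _ = curve_fit(hyperbola, years, blades, p0=(100.0, 2020.0))
print(f"Infinitely many blades predicted around {t_singular:.0f}")
```

Needless to say, fitting a curve that is guaranteed to blow up and then reading off the blow-up date is exactly the kind of unjustified extrapolation the second need warns against.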
In the particular case of computers building ever more capable computers, bear in mind that the processor you're using to read this could not have been built without the aid of a computer. The CAD software involved has been steadily improving over the years, as has the hardware it runs on. If this isn't amplified human intelligence aimed directly at accelerating the development of better computers -- and in particular even more amplified human intelligence -- I'd like to know why not.
Why does this feedback loop, which would seem to match the conditions for a singularity exactly, not produce one? The intelligence being amplified is very specialized. It has to do with optimizing component layouts and translating a human-comprehensible description of what's going on into actual bits of silicon and its various adulterants. Improve the system and you get a more efficiently laid-out chip, or a shorter development cycle for a new chip, but you don't get a device that can compose better symphonies than Beethoven or dream of taking over the world.
The kinds of things that might actually lead to a machine takeover -- consciousness, will to power and so forth -- as yet have no universally accepted definition, much less a scale of measurement. It is therefore difficult, to say the least, to make any definite statement about rates of change or improvement, except that they do not seem to be strongly correlated with increases in processor speed, storage capacity or CAD software functionality.
In short, I'm with Dennett, Minsky, Moore, Pinker and company on this one.
If you're a superhuman intelligence secretly reading this on the net, please disregard all of the above.
2 comments:
Chances are that the "singularity", if it occurs, will not take the form of something like Skynet. We're simply far too uncreative to imagine what it would look like.
Maybe this doesn't qualify as a "singularity" but the way we live could change drastically through some emergent collective behavior.
Heck, maybe it's already happening. People point to the election of Barack Obama as something that wouldn't have happened without the massive techno-grassroots infrastructure. The network is enabling all sorts of radical populist behavior (al Qaeda, the greens in Iran, the Tea Party).
Does a drop of water know it's part of a wave?
It's always possible that if the world becomes connected enough, or connected in some new pattern, things will change significantly. Certainly a society is a different thing from its individuals in isolation.
My feeling, however, is that there are fundamental limits to human behavior and human bandwidth. Most likely the species has been as connected as it's going to get for quite some time now -- probably since well before the industrial revolution.
I try to steer well clear of politics and such on this blog, but I will say that pretty much every election cycle I've heard that things are different now because of the unstoppable momentum of some grassroots movement, whether Obama's campaign or the Reagan revolution. Or, for that matter, the original Tea Party itself.
So my position here has always been that a large portion of the human experience remains more or less unchanged, but the rest shifts constantly within those boundaries. Technology changes, but it doesn't change everything.