FLAGSTAFF, Arizona, July 20, 2004 -- Those of you who read this space regularly will be familiar with the concept of logarithmic time perception, or "logtime" for short. This idea, bandied about by many writers but summarized expertly by James Kenney, holds that as we get older, our perception of time changes, on a logarithmic scale based on our age. Kenney argues that the years seem to go by faster -- and, alarmingly, exponentially so -- as we get older.
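To make that concrete -- and this is my own back-of-the-envelope formalization, not necessarily Kenney's exact math -- suppose each passing year feels shorter in proportion to how many years you've already lived, so that a year lived at age n feels roughly 1/n as long as a year lived at age one. Then the felt length of any span of years adds up like a logarithm, which a few lines of Python can illustrate:

    import math

    def perceived_span(start_age, end_age):
        # "Felt" duration of the years between two ages, in arbitrary units,
        # if a year lived at age n feels roughly 1/n as long as a year at age 1.
        # The sum of 1/n over the span behaves like log(end_age / start_age).
        return math.log(end_age / start_age)

    # Under this model, the decade from 10 to 20 feels about as long as
    # the two decades from 20 to 40, or the four decades from 40 to 80:
    print(perceived_span(10, 20))   # ~0.69
    print(perceived_span(20, 40))   # ~0.69
    print(perceived_span(40, 80))   # ~0.69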
Recently, Kenney called my attention to the writings of Ray Kurzweil, a pioneer both in the field of synthesizers and that of artificial intelligence (AI), on a related subject: the exponential acceleration of technology development. Briefly, Kurzweil argues that technology -- specifically, computer hardware and software, in combination with knowledge of how the human brain works -- is advancing so rapidly that within a short span of time, superintelligent computers that mimic the brain will be developed.
This idea is known among futurists as "the Singularity":
"'The Singularity' is a phrase borrowed from the astrophysics of black holes. The phrase has varied meanings; as used by Vernor Vinge and Raymond Kurzweil, it refers to the idea that accelerating technology will lead to superhuman machine intelligence that will soon exceed human intelligence, probably by the year 2030. The results on the other side of the 'event horizon,' they say, are unpredictable." -- Philip J. Windley, writing in the June, 2003 issue of Connect magazine
Kurzweil talks about the "Law of Accelerating Returns," under which technology development progresses at an accelerating, exponential rate -- the flip side, in a sense, of the logarithmic compression James Kenney describes in our time perception as we age.
Kurzweil's thesis is that in the near future, human intelligence will be superseded by that of machines that mimic, then surpass it. Further, it won't be long before we'll be able to replace a human brain -- and, depending on your theological and/or spiritual viewpoint, the soul that goes along with it -- with a machine. You'll be able to "upload" the contents of your brain to a machine that doesn't rot, get sick, and eventually die, the way "wetware" inevitably does.
Immortality has long been the Holy Grail of AI. However, such an idea is no longer the exclusive domain of science fiction and fantasy enthusiasts -- Kurzweil and a number of other very smart people, including physicist Stephen Hawking, believe that it will actually happen, and sooner than we think. Kurzweil argues that because the development of technology accelerates exponentially, we'll achieve the Singularity, and replace wetware with hardware and software, within a few decades -- which means within the lifetime of a lot of people now living, possibly including you and me.
Who wants to be immortal?
Kurzweil describes the whole thing in great detail -- you can read his article here -- and he explains the phenomenon much better than I could. So to save time, let's just say that he's right, and that within a few decades, it will become possible to replace the human brain with superfast, superintelligent computers that are much smarter than we are, invent better things than we do, and last forever.
Let's consider a few of the assumptions implicit in what he's suggesting. The first, it seems to me, is that immortality, if we could achieve it, would be a good thing. This seems obvious on the face of it -- but is it really true?
"Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon." (Susan Ertz, Anger In The Sky, 1943)
When you're immortal, you've got all eternity ahead of you, to accomplish stuff -- so why seize the day? Why do anything in particular, when tomorrow is always another day?
The human experience has always revolved around the fact that life is finite -- and, in most cases, fairly short. You grow up, you do stuff, you grow old, and you die. Take away this pattern, and life would be altered in ways we can hardly fathom.
Assume, just for the sake of discussion, that immortal superhumans would have to earn a living, just the way you and I do. So you work for a few decades, amass enough wealth to retire, and then what? Face eternity living off the income from your retirement nest egg -- which, if you're smart, will continue to appreciate. And, with forever to reach your financial goals, you'll eventually be richer than Bill Gates. But who cares, since everyone else will be just as rich?
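For what it's worth, the arithmetic bears that out. A quick sketch, again in Python -- the particular numbers (a million-dollar nest egg, a 5 percent real return, Gates's fortune pegged at roughly 50 billion dollars) are my own ballpark assumptions, nothing Kurzweil cites:

    import math

    nest_egg = 1_000_000        # assumed retirement savings, in dollars
    target = 50_000_000_000     # Bill Gates's fortune, very roughly, circa 2004
    real_return = 0.05          # assumed annual return after inflation

    # Years for nest_egg * (1 + real_return)**t to reach the target:
    years = math.log(target / nest_egg) / math.log(1 + real_return)
    print(round(years))         # about 222 years -- a blink, if you're immortal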
Another area Kurzweil doesn't grapple with is the effect of immortality on those who believe in an afterlife -- which most people do. (I don't think I do, and I assume Kurzweil doesn't, which is probably why he doesn't examine it.) If you believe in life after death, why eliminate death? That just means you never get to Heaven -- or at least that you don't get there for an awfully long time. (Even an "immortal" being is going to come to an end at some point, either by being destroyed, assuming there's no brain backup handy, or through the eventual destruction of the universe itself, which most cosmologists seem to expect.)
I have to admit that if you said, "Think fast! Immortality: yes or no?," and demanded an answer right away, I'd undoubtedly say yes. But upon reflection, I'm not sure I have any desire to live forever -- except, possibly, because I'd like to see how things look in, say, 500 or 1000 years. (Or, since we're talking immortality here, a million or a billion, etc.)
From my current vantage point at age 41, where death is still presumably a ways off (if not as far off as it used to be), I suppose it's easy to dismiss immortality as something I don't want. I might be singing a different tune when I'm 80 or 90, or even 42! Maybe the best solution would be a 50- or 100-year free trial: Guaranteed life for that time period, after which you could decide whether or not to keep it forever.
If we only had a brain
The other assumption implicit in Kurzweil's thesis is that all of society's problems could be solved if only we were smart enough. Superintelligence, he argues, will be "the last thing humans will need to invent," because once machines surpass our intelligence, they'll take over the mental heavy lifting, invent everything we need, and dream up a solution to every problem we have. I'm not so confident.
If you assume superhuman intelligence, who's to say there won't be superhuman evil to go along with it? Imagine what would happen if Islamic extremists had superintelligence behind them; they'd have the means to blow us up by next Tuesday. On the other hand, I suppose our own superintelligent thinkers would be concocting ways of defending ourselves, and so on. But I see no reason to doubt that if we were able to produce "consciousness" and vast intelligence in an artificial mind, the potential for abuse and evil would be just as great as the potential for good. How could it be otherwise?
I think Kurzweil places far too much faith in the "exponential" nature of technological development. If you accept that everything is getting better and faster at an exponential rate, well, it's a certainty that the Singularity will occur in a fairly short time. But that's a big "if."
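Just to spell out what that kind of exponential growth amounts to, here's a back-of-the-envelope sketch. The 18-month doubling period for computing power is my own round figure, one common statement of Moore's Law; pick a different period and the shape of the curve is the same:

    doubling_period_years = 1.5   # assumed doubling time for computing power

    def speedup(years):
        # Factor by which computing power grows over a span of years
        # if it doubles once every doubling_period_years.
        return 2 ** (years / doubling_period_years)

    for span in (10, 20, 30):
        print(span, "years:", round(speedup(span)))
    # Prints roughly 102, 10321, and 1048576 -- about a millionfold in 30 years.

If the curve really does hold, the numbers get absurd very quickly -- which is exactly why the "if" matters.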
I remember an article in Scientific American in the early 1970s, about chess-playing computer programs. At that time, there were some fairly promising programs, but nothing had been developed that could beat a decently strong human opponent. David Levy, a Scottish International Master and one of the better players on the world scene at the time, placed a wager of 1000 pounds that no computer would be able to beat him by 1978. The author of the Scientific American article opined that "Levy will have his hands full in 1978," or something to that effect. Not so; computers weren't even close by that time. They could beat me, but not David Levy or any other strong player.
Not only did it take much longer than expected to develop software that could beat strong humans, but even in 2004, the issue is still in doubt. As I noted a few columns back, former world champion Garry Kasparov has lost, then drawn, a couple of matches against the strongest software out there. The handwriting is probably on the wall; it won't be long before even Kasparov will get his clock cleaned by a strong program. But be that as it may, it took teams of developers much longer to get there than had been anticipated. And a good game of chess, impressive though it is, is a mere bagatelle compared to what can be done by a mind that is truly "intelligent," in any generally accepted sense of the term.
Kurzweil's right about a lot of things: Moore's Law (loosely stated, it holds that computing power -- strictly speaking, the number of transistors on a chip -- doubles roughly every 18 to 24 months) has certainly held true so far for conventional computing devices. But Kurzweil doesn't pay enough attention to the subtle role of brain chemistry in mental function. Emotions, mental illness, moods -- would artificial intelligence try to recreate them? The apparent answer is yes, since Kurzweil talks mostly about replicating the human brain -- a much more promising avenue than the one the AI community pursued for so long, namely hand-building algorithms and heuristics on conventional computers and calling the result "artificial intelligence."
Kurzweil and the futurists need to get out more. Take a walk in the woods. Go to the beach. Climb a mountain. Watch some birds. Meditate for an hour. (Would superintelligent beings still derive the same benefit a human can, by clearing his mind of all conscious thoughts, intelligent or otherwise?) Do they really think all of that stuff is going to go away -- or become irrelevant -- once we have enough computing power?
I can't help thinking of the scientists who looked at the promise of nuclear power, in the 1950s, and said, "This is great -- it'll make electric power so cheap, there'll be no need to meter it." Well, it was great when you only looked at the amount of uranium in the Earth and how much energy it contained -- but a lot of things happened along the way that those scientists didn't anticipate. The same thing is likely to happen when we try to make serious inroads, with technology, into the realm of human intelligence and ways to enhance or surpass it.
Could it happen?
Having said that, I'm not betting against the Singularity -- or some part of Kurzweil's vision of the future -- coming to pass at some point. A lot of very smart people believe that it's going to happen. And a lot of what Kurzweil talks about has already happened and is happening as you read this column.
However, I think he greatly underestimates the complexity of the task at hand and overestimates the universality of the Law of Accelerating Returns. He places too much faith in that smooth exponential curve, when in reality not everything progresses at the same rate. Some things develop quickly, others take longer, and some things just stay the same.
And I hate to sound like a Luddite, but even assuming the Singularity is possible, why does Kurzweil want it so badly? Superhuman intelligence and immortality would change our whole way of life in ways we can't possibly imagine -- and many of those changes might not be good.
I don't know about you, but I like the fact that people are impractical, unpredictable, and imperfect. It would certainly help if we were smarter, but would superintelligence conquer all? I doubt it.
Copyright © 2004 Kafalas.com, LLC