If the idea of a coming technological Singularity were a church, inventor and innovation maven Ray Kurzweil would be its most ardent and persuasive evangelist, and woe betide the unprepared heretic who crossed his path. For some, the Singularity—a point in our future history when artificial and enhanced intelligences will be the major drivers of progress and leave merely human imagination in the dust—is the much derided “rapture of the nerds” and Kurzweil is a brilliant man dreaming of immortality to avoid facing death. For some others, the Singularity is an inevitable outcome of our accelerating advances in A.I., biomedicine and nanotech, and Kurzweil is an intensely knowledgeable and foresighted expert who embraces what most of the world fears to recognize.
Presumably, there are also still others who fall somewhere between those camps, but they are not the people who usually write online. In feature articles, blog posts and comment sections about Kurzweil and his ideas, those highly polarized camps always seem to dominate.
Refreshingly, Carl Zimmer offers an antidote to the feverish extremes in an excerpt from his new e-book Brain Cuttings, now posted at ScientificAmerican.com. “Can You Live Forever? Maybe Not—But You Can Have Fun Trying” is a balanced but by no means wishy-washy critical appraisal of the ideas that Kurzweil and other speakers tout at the annual Singularity Summit.
These two paragraphs neatly summarize Carl’s conclusions:
After the meeting I decided to visit researchers working on the type of technology that people such as Kurzweil consider the steppingstones to the Singularity. Not one of them takes Kurzweil’s own vision of the future seriously. We will not have some sort of cybernetic immortality in the next few decades. The human brain is far too mysterious and computers far too crude for such a union anytime soon, if ever. In fact, some scientists regard all this talk of the Singularity as a reckless promise of false hope to the afflicted.
But when I asked these skeptics about the future, even their most conservative visions were unsettling: a future in which people boost their brains with enhancing drugs, for example, or have sophisticated computers implanted in their skulls for life. While we may never be able to upload our minds into a computer, we may still be able to build computers based on the layout of the human brain. I can report I have not drunk the Singularity Kool-Aid, but I have taken a sip.
By all means, read Carl’s whole article. If you hurry, you might even be able to comment on it before most of the haters and fanboys arrive.
Carl’s perspective is pretty squarely what mine has been for some time: I don’t know whether to believe in the phenomenon of the Singularity as such, and I’m very skeptical of some of the specific technological assumptions that often go into it (such as uploading human minds), but monumental improvements in A.I., life extension and medicine, genetic engineering, nanotech, and mental and physical augmentation all seem quite certain. My greatest doubts surround Kurzweil’s highly optimistic timetable, which seems to call for us to achieve that time of miracles by 2050 or so. Nevertheless, I do have considerable respect and admiration for Kurzweil, who is truly a genius and who may be as well-versed in cutting-edge technology as anyone alive.
Perhaps my views would surprise some of those who have read my recent feature story in IEEE Spectrum, “Ray Kurzweil’s Slippery Futurism” (December 2010 issue). Part of why Kurzweil is taken so seriously when he talks about the Singularity is that he enjoys a reputation as a prescient seer of tech trends. In his books The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999) he makes voluminous predictions about how the world will work by 2010 (and beyond). Today he maintains that the track record for his predictions to date is extremely good. I question that claim, however. To quote from my article:
On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable.
For example, Kurzweil is commonly credited with having foreseen the rise of the Web, yet I point out that he made that prediction about widespread networked computing at a time when many others were making the same claim, and when businesses if not whole industries were already being built on it. Similarly, he repeatedly predicted in 2005 and thereafter that “by 2010, computers will disappear”—referring, of course, to the spread of embedded microprocessors. But given that embedded microprocessors were already commonplace by that time, and no one expected the trend to stop suddenly, what did that grandiose claim really mean?
Read the article for my full argument. I had other examples in my original draft of the article that might be worth posting here if there’s interest, but I think my basic criticism will stand or fall based on what is in print in IEEE Spectrum.
Lo and behold: I started writing this post today because Carl’s book excerpt, posted yesterday, gave me a good occasion to bring up the subject of my own article. But just this afternoon, I learned that IEEE Spectrum has posted a response from Kurzweil to what I wrote. Excellent. I’ll be replying to his letter shortly, after I check a few things with the magazine’s editors, and probably posting my remarks both here and at IEEE Spectrum. Stay tuned.
Update (Xmas day): I’ve written my reply to Kurzweil, but the Christmas holidays seem to be interfering with my getting answers I need from IEEE Spectrum’s office (for reasons that will become obvious). So it looks like I’ll have to wait until early this coming week to post it. No hurry.
Update: My full reply is now available here.
Kurzweil, the Singularity and His Futurism by PLOS Blogs Network, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 4.0 International License.