Talk for Accelerating Change 2005, September 2005
At the moment, I have these HTML "slides" available at: www.rohan.sdsu.edu/faculty/vinge/misc/ac2005
Some common growth patterns
Specializing to the case of recent technological growth, we see the
overall exponential improvement in hardware as riding atop the
successive S-curves of individual technologies.
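A toy model makes this stacking concrete. The sketch below is purely
illustrative (the logistic form, the 10x capability steps, and the
8-unit generation spacing are all assumptions, not data): each
technology generation follows an S-curve, yet the best available
technology at any moment grows roughly exponentially.

```python
# Illustrative sketch: successive logistic S-curves whose upper
# envelope tracks an exponential trend. All parameters are invented.
import math

def logistic(t, midpoint, ceiling, rate=1.0):
    """One technology's S-curve: slow start, rapid rise, saturation."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Generation i matures 8 time units after its predecessor and
# saturates at 10x the previous ceiling (invented numbers).
generations = [(8 * i, 10.0 ** i) for i in range(1, 5)]

for t in range(0, 41, 4):
    # Envelope: the most effective technology available at time t.
    envelope = max(logistic(t, mid, cap) for mid, cap in generations)
    # A smooth exponential with the same doubling rate, for comparison
    # (the absolute scale is arbitrary).
    trend = 10.0 ** (t / 8.0)
    print(f"t={t:2d}  envelope={envelope:10.1f}  trend={trend:10.1f}")
```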
If this goes on ...
Assume the exponential improvement in computation continues for
another few decades. Then what is the killer app?
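For scale, a back-of-the-envelope calculation (assuming, purely for
illustration, a doubling time of about two years sustained for thirty
years):

\[ 2^{30/2} = 2^{15} \approx 3.3 \times 10^{4} \]

so hardware would end up tens of thousands of times more capable than
today's.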
The exact form is not clear (or rather, there are a number of
plausible forms -- many of which I expect will be discussed here at
AC2005!), but the essential change/app/technology is:
the development of creativity and intellect that surpass
present-day humans.
Why call this transition the "Technological Singularity"?
- By analogy with the use of "singularity" in mathematics
- A place where some regularity property is lost
- Not necessarily a place where anything becomes infinite
- By analogy with the use of "singularity" in physics
- A place where the rules profoundly change
- What comes beyond is intrinsically less knowable/predictable than before
- The apocalyptic endpoint of radical optimism :-)
Comparison with other radical changes
- With other inventions?
- The Printing Press
- Agriculture
- Fire
- With the notion that tech progress may become incomprehensibly
complex and rapid? (See S. Ulam, "Tribute to John von Neumann".)
- With the rise of humankind within the animal kingdom?
- Perhaps this is the closest analogy
- With the beginnings of all life on Earth?
What if the Singularity doesn't happen?
- Maybe Murphy's Law trumps Moore's Law, perhaps as:
"The maximum possible effectiveness of a software system increases
in direct proportion to the log of the effectiveness (i.e., speed,
bandwidth, memory capacity) of the underlying hardware."
(A short derivation of what this rule implies appears at the end of
this list.)
- The symptoms of this kind of failure:
- Large software projects failing
- Hardware demand not keeping up with Moore's Law
- Software insufficient to support radical hardware R&D
- As in the novel A Deepness in the Sky
- The future: legacy software ... thousands of years deep
- Laptop diving and software archeology
- Maybe catastrophe intervenes
- The more we learn about the cosmos, the more dangerous a
place it seems.
- In the short term, we humans are our own worst enemies -- and we're
all trapped in one very small place.
- In speaking of this awesome range of threats, Sir Martin Rees wrote
(on pp. 7-8 of his book Our Final Hour):
"It may not be absurd hyperbole -- indeed it may not even be an
overstatement -- to assert that the most crucial location in space and
time (apart from the big bang itself) could be here and now."
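Here is the short derivation promised in the Murphy-vs-Moore item above
(the symbols are mine, chosen for illustration): write \(H\) for
hardware effectiveness, doubling every \(\tau\) years, and \(S\) for
software effectiveness. The quoted rule says \(S \propto \log H\), so

\[ H(t) = H_0 \, 2^{t/\tau} \quad\Longrightarrow\quad S(t) \propto \log H_0 + \frac{t}{\tau}\,\log 2 . \]

Exponential hardware growth would then buy only linear growth in
software effectiveness: steady progress, but nothing like a runaway.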
My own conclusion: while the Technological Singularity is not at all a sure
thing, it is the most likely non-catastrophic scenario on the
horizon.
Of course, the Singularity itself could be catastrophic. What can we do to make
the bad versions less likely?
Singularity futures
Possible paths to the Singularity
- What if: AI (Artificial Intelligence) research succeeds?
- I.J. Good, "Speculations Concerning the First Ultraintelligent Machine"
- Hannes Alfvén, The End of Man?
- What if: The Internet itself attains unquestioned life and
intelligence?
- Gregory Stock, Metaman
- Bruce Sterling, "Maneki Neko"
- What if: Fine-grained distributed systems are aggressively
successful?
- Karl Schroeder, Ventus
- Vernor Vinge, "Fast Times at Fairmont High"
- What if: IA (Intelligence Amplification) occurs
- As the radical endpoint of human/computer interface research?
- Poul Anderson, "Kings Who Die"
- Vernor Vinge, "Bookworm, Run!"
- As the outcome of bioscience research?
- Vernor Vinge, "Fast Times at Fairmont High"
- Vernor Vinge, "Win a Nobel Prize!"
Soft takeoffs versus hard takeoffs
How long will the transition through the Singularity take?
- Soft takeoff -- the complete transition takes years, perhaps even
with the exact beginning and end points a matter of debate.
- Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology
- Hans Moravec, Robot: Mere Machine to Transcendent Mind
- Charles Stross, Accelerando
- Hard takeoff -- the transition takes place in a very short period of
time, perhaps less than 100 hours, and without obvious precursors.
- Greg Bear, "Blood Music"
- Reasons for granting plausibility -- even likelihood -- to
the hard takeoff scenario
Hard takeoff as a Very Bad Thing
While there is plenty of reason to be nervous about changes as big as the
Singularity (consider the closest analogies!), I think there are many reasons
to be hopeful about such a thing -- if it happens as a soft takeoff (see Ray
Kurzweil and Hans Moravec references above).
On the other hand, it's very difficult to muster optimism about a hard takeoff:
- Onset so abrupt that it resembles a natural disaster (or a literal
explosion) more than a social/technological change.
- Onset precursors like those of an avalanche or an earthquake: an
event quite possibly beyond any rational planning on our part.
- Lots of fun stories here, but probably few you'd want to be a
part of.
- Hard takeoff as a side effect of some random, innocent experiment
- Hard takeoff as a (perhaps inadvertent) marketing coup
- Hard takeoff as the chaotic climax of a military arms race
Trying for a soft takeoff
There are very good arguments that banning forms of research is an
exercise in futility. At the same time, there is an intuitive attractiveness
in IA, both by itself and in conjunction with the other possible
paths to the Singularity:
- IA is already undertaken by thousands of researchers --
including many people who don't even think about the Singularity.
- IA has the potential to allow researchers to keep up
as events slide closer and closer to the edge of the runaway. This
might not slow the transition, but it could mean that ongoing events
get some thoughtful consideration and control.
- IA and the Internet provide the potential for very large
numbers of thoughtful people to attend to the ongoing complexity, with
the hope of providing some safe counsel.
Gotchas
My friend Mike Gannis has made a good case for fearing IA
(and I paraphrase): ~"We humans are naturally evolved creatures. We carry
around in the back of our brains millions of years of bloody baggage. That cargo
may be unnecessary and suicidal in our present circumstances, but it is there.
Machines, designed de novo, could be much less destructively
inclined. In fact, if we go ahead with turning ourselves into gods, there is
only one person I would trust to be first.~" [At this point, Mike pats himself
on the chest.]
I think this is a valid gotcha (except for the last sentence :-). This danger
should figure in any analysis of IA. Two possibilities for
ameliorating this danger:
- Secret military IA research can't be banned, but public
research in the area should be strongly encouraged
- The more people who can benefit from ongoing IA research, the
better. This is not simply a matter of democracy and freedom; humanity as a
whole is already a greater-than-human source of wisdom. Enlisting that
resource is something that technology can do in an incremental and ongoing
way.
References
- H. Alfvén, writing as Olof Johannesson, The End of Man?,
Award Books, 1969. Earlier published as "The Tale of the Big Computer",
Coward-McCann; translated from a book copyright 1966 by Albert Bonniers
Förlag AB, with English translation copyright 1966 by Victor Gollancz, Ltd.
- P. Anderson, "Kings Who Die", If, March 1962, 8-36. The earliest story I know about intelligence amplification via computer/brain linkage.
- G. Bear, "Blood Music", Analog, June 1983, later expanded into
the novel Blood Music, Arbor House, 1985.
- I.J. Good, "Speculations Concerning the First Ultraintelligent
Machine", in Advances in Computers, vol 6, Franz L. Alt
and Morris Rubinoff, eds., 31-88, 1965, Academic Press.
(Thanks to Robert Bradbury, Good's essay appears to be online at
http://www.aeiveos.com/~bradbury/Authors/Computing/Good-IJ/SCtFUM.html )
- A. Johansen and D. Sornette, "Finite-time singularity in the dynamics of the
world population and economic indices", Physica A 294 (3-4), 465-502 (15 May
2001).
http://arXiv.org/abs/cond-mat/0002075
(Coming upon strangeness in the near future, but from a very different
direction.)
- R. Kurzweil, The Singularity Is Near: When Humans Transcend Biology, Viking, 2005.
- H. Moravec, Robot: Mere Machine to Transcendent Mind,
Oxford University Press, 1999.
- M. Rees, Our Final Hour, Basic Books, 2003. Many plausible,
terrible things that can happen without any singularity.
- K. Schroeder, Ventus, Tor Books, 2001.
- B. Sterling, "Maneki Neko", The Magazine of Fantasy & Science Fiction, May 1998; reprinted in A Good Old-Fashioned Future, Spectra, 1999.
- G. Stock, Metaman, Transworld Publishers Ltd, 1993.
- C. Stross, Accelerando, Ace, 2005.
- S. Ulam, "Tribute to John von Neumann", Bulletin of the American
Mathematical Society, vol. 64, no. 3, May 1958, pp. 1-49.
(John von Neumann using the term "singularity" with regard to tech
progress.)
- V. Vinge, "Bookworm, Run!", Analog, March 1966, 8-40. Reprinted
in The Collected Stories of Vernor Vinge, Tor Books, 2001. An early intelligence amplification story. The hero is the first experimental subject -- a chimpanzee raised to human intelligence.
- V. Vinge, "The Technological Singularity", http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html
- V. Vinge, "Nature, Bloody in Tooth and Claw?", http://www-rohan.sdsu.edu/faculty/vinge/misc/evolution.html
- V. Vinge, A Deepness in the Sky, Tor Books, 1999.
- V. Vinge, "Win a Nobel Prize!", Nature, October 2000,
reprinted in The Collected Stories of Vernor Vinge, Tor Books,
2001.
- V. Vinge, "Fast Times at Fairmont High", The
Collected Stories of Vernor Vinge, Tor Books, 2001.