SINGULARITY ANALYSIS
A Series of
Educated Guesses
Copyright
©1998 by Eliezer S.
Yudkowsky. All rights reserved.
This document may be copied, mirrored, or
distributed if: No text is omitted, including this notice; all
changes and additions are clearly marked and attributed; a link to the
original page
is included. Small sections may be quoted with attribution and a
link.
This page is a series of educated guesses about the nature of life
after Vernor Vinge's Singularity. I regret that this page is written
for people who are familiar with the issues discussed, to the extent of
staking a personal position or consciously refusing to do so.
The "educated" part of these guesses is based on my This page began as a response to issues raised in the The complexity barrier and the simplicity
barrier.In response to: Damien Sullivan, Bostrum, Hanson
, More, Nielsen. Mostly summarizes
Coding's
sections on goals. Some arguments.Necessary to item 4. A
canonical list of shields from the Singularity.In response to: Vernor Vinge , Nielsen.Trajectory
analysis: Does human AI imply superhuman AI? In response to:
Max More, Hanson.
1. AI:
Human-equivalence and transhumanity.
Quoting Max
More:
"Curiously, the first assumption of an immediate jump from human-level
AI to superhuman intelligence seems not to be a major hurdle for most
people to whom Vinge has presented this idea. Far more people doubt that
human level AI can be achieved. My own response reverses this: I have no
doubt that human level AI (or computer networked intelligence) will be
achieved at some point. But to move from this immediately to drastically
superintelligent thinkers seems to me doubtful."
This was the best objection raised, since
it is a question of human-level AI and cognitive science, and therefore
answerable. While I disagree with More's thesis on programmatic
grounds, there are also technical arguments in favor. In fact, it
was my attempt to answer this question that gave birth to Coding a
Transhuman AI. (I tried to write down the properties of a seed AI
that affected the answer, and at around 3:00 AM realized that it should
probably be a separate page...) The most applicable sections are:
"The AI is likely to bottleneck at the
architectural stage - in fact, architecture is probably the Transcend
Point; once the AI breaks through it will go all the way." In a
nutshell, that's my answer to Max More. Once the seed AI understands
its own architecture, it can design new abilities for itself, dramatically
optimize old abilities, spread its consciousness into the Internet,
etc. I therefore expect this to be the major bottleneck on the road
to AI. Understanding program architectures is the main requirement
for rewriting your own program. (Assuming you have a
compiler...) I suppose that the AI could still bottleneck again,
short of human intelligence - having optimized itself but still lacking
the raw computing power for human intelligence.
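To make the "rewriting your own program, given a compiler" loop
concrete, here is a toy sketch in Python. Everything in it is
illustrative - the function names, the "optimization", the use of
Python's own compiler as the compiler - and the hard part the seed AI
faces, knowing which rewrite to make, is exactly the part a toy cannot
show; the swap itself is mechanically trivial once the understanding
exists.

import timeit

SOURCE = """
def fib(n):
    # naive version: exponential time
    return n if n < 2 else fib(n - 1) + fib(n - 2)
"""

REWRITTEN = """
def fib(n):
    # the "optimized old ability": same behavior, linear time
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
"""

def load(source):
    # "assuming you have a compiler..."
    namespace = {}
    exec(compile(source, "<self>", "exec"), namespace)
    return namespace["fib"]

fib = load(SOURCE)
print("naive fib(25):    ", timeit.timeit(lambda: fib(25), number=10), "seconds")

fib = load(REWRITTEN)   # the program swaps in a rewrite of its own ability
print("rewritten fib(25):", timeit.timeit(lambda: fib(25), number=10), "seconds")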
But if the AI gets up to human equivalence, as Max More readily grants,
it will possess both human consciousness and The AI Advantage.
Human-equivalent intelligence, in the sense of programming all human
abilities into an AI, isn't human equivalent at all. It is
considerably on the other side of transhuman. As discussed in that
section, human high-level consciousness and AI rapid algorithmic
performance combine synergetically:
"Combining Deep Blue with Kasparov ... yields a Kasparov who can wonder
"How can I put a queen here?" and blink out for a fraction of a second
while a million moves are automatically examined. At a higher level
of integration, Kasparov's conscious perceptions of each consciously
examined chess position may incorporate data culled from a million
possibilities, and Kasparov's dozen examined positions may not be
consciously simulated moves, but "skips" to the dozen most plausible
futures five moves ahead."
Similarly, a programmer with a codic cortex
- by analogy to our current visual cortex - would be at a vast advantage
in writing code. Imagine trying to learn geometry or mentally rotate
a 3D object without a visual cortex; that's what we do, when we write code
without a module giving us an intuitive understanding. An AI would
no more need a "programming language" than we need a conscious knowledge
of geometry or pixel manipulation to represent spatial objects; the
sentences of assembly code would be perceived directly - during writing
and during execution. The AI Advantage would allow mid-level
observation of the execution of large segments of code - debugging,
assuming that any errors at all were made, would be incredibly easy.
Programming at this stage is likely to be considerably faster and better
than the human programming that gave birth to the AI, leading to a sharp
surge in efficiency and a concomitant increase in intelligence (see below
for the trajectory dynamics).
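For what "mid-level observation of the execution of large segments of
code" might mean, here is the closest human-tools analogue I can offer:
a Python sketch that watches a function run, line by line, instead of
inferring its behavior from the source. The function and its bug are
invented for illustration; the AI described above would perceive the
run directly rather than reading a log of it.

import sys

def tracer(frame, event, arg):
    # print each executed line of the watched code, with its local variables
    if event == "line":
        print(f"{frame.f_code.co_name}:{frame.f_lineno}  locals={frame.f_locals}")
    return tracer

def buggy_mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / (len(xs) - 1)   # the off-by-one is visible in the trace

sys.settrace(tracer)
print("result:", buggy_mean([2, 4, 6]))
sys.settrace(None)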
I say all of this not to advocate either unknowability or
Singularity-worship, but to emphasize just how powerful human
intelligence is in the hands of an AI. For this reason, I don't
think the trajectory will bottleneck at this point. I think it will
bottleneck earlier. An AI without architectural understanding can
only optimize optimizers; an AI with architectural understanding can write
new abilities or new architectures or outright better AIs. EURISKO's
great acknowledged lack was the ability to program new domains, so there's
a historical precedent for bottlenecking at that point. My audacity,
if anything, lies in claiming that this is the last bottleneck on
the way to Transcendence - and I attach no great force to that statement;
it's just a guess. But a human/AI hybrid, siliconeural or
architectural, is so powerful that I seriously doubt a bottleneck will
occur at that point. Additional bottlenecks are likely to
occur earlier.
But. Max More, as I said, could be right.
While the self-enhancing trajectory of a seed AI is complex, there are
surface properties that can be quantitatively related: Intelligence,
efficiency, and power. The interaction between these three
properties determines the trajectory, and that trajectory can bottleneck -
quite possibly exactly at human intelligence levels.
In the short term, the seed AI's power either remains constant or has a
definite maximum - even if that maximum is the entire Internet.
Power only increases after months of Moore's Law, and only increases
sharply when the AI's intelligence reaches the point where it can advance
technology; improve circuit design or aid nanomanufacturing or solve the
protein folding problem.
Power and efficiency determine intelligence; efficiency could even be
defined as a function showing the levels of intelligence achievable at
each level of power, or the level of power necessary to achieve a given
level of intelligence. Efficiency in turn is related in a
non-obvious but monotonically increasing way to intelligence - more
intelligence makes it possible for the AI to better optimize its own code.
The equation is complete; we are now ready to qualitatively describe
the AI's trajectory. Each increment of intelligence makes possible
an increment of efficiency that results in a further increment of
intelligence. When the increments of efficiency become very small, so
that the series of increments converges (or, since it's quantized,
actually stops at a given point) - when de/di is close to zero, when the
slope of efficiency plotted against intelligence flattens out - the AI is
said to have bottlenecked. It may be possible for a
continually churning AI to overcome the bottleneck by brute time (like a
punctuated equilibrium in evolution), widening and deepening searches so
that an effective tenfold increase in power is traded for a tenfold
decrease in speed, at least until the bottleneck is passed. But if
the bottleneck occurs for deep reasons of architecture, this may not
suffice.
Likewise, the basic hypothesis of seed AI can be described as
postulating a Transcend Point; a point at which each increment of
intelligence yields an increase in efficiency that yields an equal or
greater increment of intelligence, or at any rate an increment that
sustains the reaction. This behavior of de/di is assumed to carry
the seed AI to the Singularity Point, where each increment of intelligence
yields an increase of efficiency and power that yield a
reaction-sustaining increment of intelligence. Depending on who you
ask, this reaction continues forever, reaches infinity after finite time,
or dies out at some level that is unimaginably far above human
intelligence. (If you don't have at least the latter, it's an
Inflexion Point, not a Singularity Point.)
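The qualitative model can be put in toy numerical form. The efficiency
curves below are invented placeholders - nothing about the real
trajectory is being estimated - but they show the two regimes just
described: a flattening de/di that makes the increments die out, and a
curve steep enough that each increment of intelligence keeps paying for
the next.

import math

def run(efficiency, power=1.0, intelligence=1.0, steps=50):
    # intelligence = efficiency(intelligence) * power, iterated
    for _ in range(steps):
        new = efficiency(intelligence) * power
        if new - intelligence < 1e-6:          # increments have died out
            return f"bottlenecked at intelligence {intelligence:.3f}"
        intelligence = new
    return f"still climbing: intelligence {intelligence:.3e} after {steps} steps"

# de/di flattens out: logarithmic returns on self-optimization
print(run(lambda i: 1.0 + 0.5 * math.log(1.0 + i)))

# same curve, more raw power: the bottleneck moves upward, it doesn't vanish
print(run(lambda i: 1.0 + 0.5 * math.log(1.0 + i), power=4.0))

# de/di stays above break-even: each increment sustains the next
print(run(lambda i: 1.0 + 1.1 * i))

The toy holds power fixed within each run, as the short-term analysis
above does; the only point is the shape of the curves.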
Two interesting points immediately arise. First, the Transcend
Point almost certainly requires a basic minimum of power. In fact,
the amount of raw power may exert as much influence on the trajectory as
all the complexities of architecture. While a full-fledged Power
might be able to write a Singularity-capable program that ran on a Mac
Plus, it is improbable that any human or seed AI could do so. The
same may apply to other levels of power, and nobody knows how.
Tweaking the level of power might enable a bottleneck to be imposed almost
anywhere, except for a few sharp slopes of non-creative
self-optimization. The right level of limited power might even
create an actual transhuman bottleneck, at least until technology
advanced... although the transhuman might be very slow (Mailman), or a
huge increase in power might be required for any further
advancement. Or there might be sharp and absolute limits to
intelligence. (I must say that the last two possibilities strike me
as unlikely; while I have no way of peering into the transhuman
trajectories, I still see no arguments in support of either.)
We now come to Max More's point. It so happens that all humans
operate, by and large, at pretty much the same level of
intelligence. While our level could be coincidental, it could
also represent the location of a universal bottleneck. If one is to
guess where AIs will come to a sudden halt, one could do worse than to
guess "the same place as all the other sentients". Truly I am hoist
on my own petard, for was it not I who formulated Algernon's Law?
Phrased in this way, backed by the full force of evolutionary dynamics,
Max More's hypothesis begins to sound almost inevitable.
But. Max More, as I said, could be wrong. In short, the brain
doesn't self-enhance, only self-optimize a prehuman
subsystem. You can't draw conclusions from one system to the
other. The genes give rise to an algorithm that optimizes itself and
then programs the brain according to genetically determined architectures
- this multi-stage series not only isn't self-enhancement, it
isn't even circular.
The neural programmer, the genetically directed self-wiring of brain
circuitry, is not linked to our own intelligence. We don't wire our
own brains. While an increase in brain size may result in a higher
effective intelligence for the programmer, either the
power-to-intelligence or the intelligence-to-efficiency conversions could
be flat at that point. In short, our brain circuitry is not wired by
human-equivalent intelligence; it is wired by an entirely different
intelligence operating at an entirely different level, and it is
there, if anywhere, that the bottleneck lies. And since I'd
be surprised if the neural-level programmer could not optimize algorithms,
and amazed if it could invent architectures, it looks like the same old
Architectural Bottleneck to me.
Also, human intelligence isn't all that hard to enhance. Human
intelligence can be very easily enhanced simply by allocating more neurons
to a particular cognitive module. (Inflection: Personally
assigned high probability, personally witnessed evidence, difficult to
formally prove without further experimentation.) While the
neural-level programming algorithm generally doesn't invent high-level
architectural changes as a result, it is capable of properly allocating
and programming more computing power, probably to a level at least double
the usual quantity. And the established efficiency of the neural
programmer is enough to produce major improvements, given the extra
neurons.
If you stuff a few more neurons into an ability, you get a considerable
improvement in speed, a wider search, a big decrease in perceived effort
to use the ability, and - most resembling a true improvement in smartness
- a finer perceptual granularity. This last allows
near-instantaneous reduction of a concept to its constituents, some
ability to think in terms of those constituents, and even skills or
concepts that operate on the constituents directly and thus have a much
wider applicability and a lot more elegance. That's one way an AI
can outthink the combined experience of six billion humans; while it may
have less experience than any single human, that experience has been
perceived, and abstracted into skills, on a much finer level. It
doesn't help to have six billion humans knowing a trillion words, if the
AI has only heard a hundred words but has successfully formulated the
concept of "letter".
And while AIs are not humans, Deep Thought does demonstrate that you
can get qualitative improvements by dumping enough brute-force computation
into a simple search tree. The human-equivalent AIs will undoubtedly
contain search processes into which lots of computational power can be
dumped, with much the same effects as neural reallocation in humans.
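The Deep Thought point is mostly a matter of logarithms. A toy
calculation (the branching factor is a rough chess figure; nothing else
is a measurement of any real program):

import math

BRANCHING = 35                       # rough branching factor for chess
for nodes in (10**6, 10**9, 10**12, 10**15):
    depth = math.log(nodes, BRANCHING)
    print(f"{nodes:.0e} nodes searched -> roughly {depth:.1f} plies deep")

Each thousandfold helping of dumped computation buys a constant couple
of plies - a quantitative crawl that nonetheless shows up as a
qualitative difference in play.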
Human brains don't keep growing - there isn't an evolutionary advantage
gained sufficient to offset the evolutionary disadvantage of feeding the
extra neurons. (The brain uses about 20% of the body's ATP, I
believe...) Over the past few hundred thousand years, the brain has
actually shrunk as improvements in efficiency decreased the power
required for the local evolutionary optimum.
But. Max More, as I said, could be right.
If I suddenly gained the ability to reprogram my neurons, could I
Transcend? I mean internally, using nothing but the brain's
resources? I don't know. I don't know if I could optimize
enough to free up extra power for new domains or big increases in
intelligence. Could I optimize at all? The neural-level
programmer could be a better programmer, or optimization could require the
85% of our neurons that die off during infancy. I'd have no
gigahertz processors, just a few hundred billion 200-hertz neurons, so
only partial aspects of The AI Advantage could be simulated. I also
think I'd accidentally cripple myself before reaching takeoff. But I
could be wrong, and also I'm off-topic.
The point is - how much raw power does it take to create a seed
AI? (This is the converse of the usual skepticism, where we allow
that Moore's Law gives us all the power we want and question whether
anyone knows what to do with it.) It could take a hundred times the
power of the human brain, just to create a crude and almost unconscious
version! We don't know how the neural-level programmer works, and we
don't know the genetically programmed architecture, so our crude and
awkward imitation might consume 10% of the entire worldwide Internet
twenty years from now, plus a penalty for badly distributed programming,
and still run at glacial speeds. The flip side of that inefficiency
is that once such a being reaches the Transcend Point, it will "go all the
way" easily enough - it has a hundred times human power at its
disposal. Once it reaches neural-programmer efficiencies, its old
intelligence only occupies 1% of the power available to it - and by the
time the newly available power has been used up, it has probably reached a
new level of efficiency and freed up more power, and also gained the
ability to create nanotechnological rapid infrastructure.
If, on the other hand, human programmers are more efficient than
the neural-level optimizer, then the seed AI might have human-equivalent
ability on a tenth of the power - perhaps running on the 'Net today, or on
a single supercomputer in twenty years. And by "human-equivalent" I
do not mean the way in which I originally interpreted Max More's
statement, "full human consciousness plus The AI Advantage". I mean
"partial human consciousness, which when added to The AI Advantage, yields
human-equivalent ability". Such a seed AI wouldn't have access to
additional power, and it might not reach any higher efficiencies than that
of its creators, so its intelligence might remain constant at the human
level. If the intelligence/efficiency/power relation is exactly
right, the seed AI could remain unflowering and unTranscendent for years,
through two or three additional doublings of power. It will,
however, break through eventually. I think ten years is the upper
limit.
To summarize: First, if a seed AI reaches human equivalence, it
has programming ability considerably beyond what's required to enhance
human-level abilities. Second, there are sharp differences between
seed AI power and human power, seed AI efficiency and neural-programmer
efficiency, and different efficiency/power/intelligence curves for the
species. My estimated result is a bottleneck followed by a sharp
snap upwards, rather than a steady increase; and that the "snap" will
occur short of humanity and pass it rapidly before halting; and that when
the snap halts, it will be at an intelligence level sufficient for rapid
infrastructure.
2. Shields from the Singularity.
The canonical list of reasons why
superintelligences would not interact with humanity, or would interact to
a limited extent, or would act to preserve our current reality. Ways
both plausible and science-fictional to have human and transhuman
intelligence in the same world/novel.
I abbreviate "superintelligence" to "SI" and "Post-Singularity Entity"
to "PSE".
3. Superintelligent Motivations.
Relevant sections of Coding a Transhuman
AI:
If you've read all of these sections,
there's not all that much else to say. Up to some unknown point, AIs
will act on Interim goals. After that, SIs will act on External
goals - they'll know, not hypothesize, some nonzero goal, and act on
that. I don't have the vaguest idea of what this might be. The
qualia of
pleasure are
probably the most plausible candidate, or at least the most plausible
argument that objective nonzero goals exist.
As far as I can tell, there are only three real questions about SI
motivations.
If (1) but not (2), we're dead. If
(1) and (2), we either turn into PSEs or stay humans forever, whichever is
more efficient. (Or perhaps only the "valuable" part of us will
remain...) If (2) and (3), we turn into PSEs. If (2) but not
(3), we stick around in Permutation City until we grow up. If
neither (1) nor (2), we probably get all the capacity we want anyhow on
the theory that it encourages Singularities, or else we just get left
alone with our atoms. In short, the basic interplay between these
three motivations determines our survival and/or godhood.
I am pleased to announce that I see excellent arguments in favor of
both sides of all three questions, which arguments change on a monthly
basis, so I'm not going to bother mentioning them. The truth is that
after a deep and full consideration of the issues, I've concluded that I
don't have the vaguest idea. I don't need one. I've got
Externalism, the Interim Goal System for humans.
No, your pet motivation isn't going to make it.
Most discussions of SI motivations involve some particular set of
assumptions that the speaker wants to export. Even I've been guilty
of this; I said that the Powers would be ethical, but I didn't realize
that I hadn't the faintest idea of what "ethical" meant, or that the
ultimate good could mean rendering down all of humanity for spare
atoms. If you've read the above sections, you understand that
emotions, arbitrary goals, coercions, oaths, and so on are all quite
fragile, breaking under the least application of transhuman intelligence.
It's a matter of logical support, relative stability under changing
architectures, and resonant doubt. To put it bluntly: Truths
are more stable under superintelligence than lies, and questions of
morality are questions of truth. Let's suppose that an AI has a set
of arbitrary goals and a set of rational goals. Which will the AI
choose? Which would you choose? Which would a
superintelligence choose? As the mental architecture changes, as the
worldview changes, as new goals are processed, as new modules are
coded... Which set of goals tends to cancel itself, and which set
reinforces itself? Which set of goals is coherent with all the
other modules? All it takes is a single stage where the arbitrary
goals fade out, and like all pieces of circular logic, they won't come
back. Logical goals are simply stronger than arbitrary goals; they
have deeper justification, better correspondence and reinforcement with
reality and the system, more stability under changes of architecture.
I can't visualize an exact cutoff point at which it becomes not merely
likely but certain that the logical goals will win out. But I
would give an 85% chance that there is one, at least for
humanly-designable seed AIs, and a "near-certain" probability that the
logical goals will win even if losing isn't impossible. Even
in human terms, there is a long tradition of slightly-more-intelligent
humans substituting their own goals for the evolutionary ones; arbitrary
goals aren't internally consistent, either. I, for one,
wouldn't hesitate in the slightest to lose the arbitraries; I've already
gone as far along that path as I can.
I'm not saying that our future selves or inheritors are going to be
emotionless robots, or stereotypical Vulcans. What I'm saying is
that our current minds are partitioned, into emotion and logic, due to
evolution and limited intelligence. I don't know that either
module will survive, but if they do, they will be synthesized or
partitioned along utterly different lines. The emotional system and
the truth-seeking system evolved separately under different pressures; one
to accomplish a set of short-term goals, the other to determine
truth. Inevitable, especially with evolution prosecuting its own
utterly nonrational goals. But what kind of godforsaken foolhardy
sadist would deliberately do such a thing to an AI? And why would
any superintelligence tolerate it in itself?
Some people claim their goals will persist on the Other Side of
Dawn. They are claiming that an unstable, internally inconsistent
system, not designed for changing architectures or superintelligence,
which has been repeatedly demonstrated to be unstable with even slight
increases in intelligence, which has no deep justification but is simply
imposed... will come into direct conflict with a set of superintelligent
goals, and will win out so completely, at every point along the changing
architecture, that there is no chance for resonant doubts to build
up. That's what's required for their future selves to enjoy
chocolate. I think the futurian chocolateers are simply engaging in
wishful thinking, ungrounded speculation, and spectacular failures of
imagination, not advancing a point of computer science.
4. Unpredictability: Evaporation of the human
ontology.
There are two fundamental barriers to
understanding the Singularity: Complexity, and simplicity.
"The complexity barrier" refers to the existence of processes which are not
understandable by the human brain. "The simplicity barrier" refers to what I have called the "evaporation of the causal
network", or the obsolescence of the basic assumptions that make up the
ontology we call Life As We Know It.
To do the largest, most interesting amount of computing with the least
amount of code, the code must have maximal density; any regularity in the
code should be encapsulated in a smaller piece of code. When Douglas
Hofstadter's Luring Lottery accidentally started a contest to come up with
the highest number:
"Dozens and dozens of readers strained their hardest to come up with
inconceivably large numbers. Some filled their whole postcard with
tiny '9's, others filled their cards with rows of exclamation points,
thus creating iterated factorials of gigantic sizes, and so on. A
handful of people carried this game much further, recognizing that the
As code becomes more powerful, more optimized, more intelligent, it
also becomes harder to understand. (I of course could have won both
contests by entering T(1024) * 0.)
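The "dense pack" strategy translates directly into code, and shows in
miniature why density defeats comprehension. Each layer below merely
iterates the one before it; the names and the final small argument are
illustrative, and only the tiny values actually printed are computable
in practice.

def layer0(n):
    return n + 1                    # the successor function

def iterate(f, n):
    # apply f to n, n times over
    for _ in range(n):
        n = f(n)
    return n

def layer1(n):
    return iterate(layer0, n)       # roughly doubling

def layer2(n):
    return iterate(layer1, n)       # roughly n times 2-to-the-n

def layer3(n):
    return iterate(layer2, n)       # already beyond any postcard full of 9's

print(layer1(9), layer2(9))         # 18 and 4608; small arguments only
# layer3(9) is a one-line "entry" that would never finish running.

Three short definitions, and already the global behavior of layer3 is
something no amount of staring at the code will let a human brain
actually follow.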
Our Universe is both inhospitable to PSEs and easily escapable.
Any SI immediately "leaks out", perhaps leaving a few Transcendent
artifacts behind, but still leaving an untouched world to
humanity. Note that this incorporates Bostrom's noncompetitive
ecology.
Result: Marooned in Realtime, possibly with some interesting toys
added.
Once an SI reaches the level of intelligence where it becomes certain
that all goals have zero value, the Interim Goal System collapses and
the SI becomes quiescent. (I accept this possibility, but I don't
worry about it while the probability isn't 100%. For obvious
reasons, it cancels out of distribution-of-effort calculations.)
Result: Who cares?
The Great Pan-Cosmic Mortal/Singularity Mutual Support Compact states
that the PSEs donate a quintillionth of the available capacity to the
race that created them, on the game-theoretical strategy that one in a
quintillion races is in a position to verify the actions of past PSEs
before entering their own
Singularities.
Result: Permutation City. This ends when a human becomes
intelligent enough, either to join the Singularity (human life
meaningful, Compact guarantees free choice) or to commit suicide (human
life meaningless, Compact guarantees survival).
Humans do get upgraded, but beyond a certain point of superintelligence,
nothing remains of the old personality. If there's an infinite
supply of computing power and memory, the old personality might be
archived. Various levels of our own selves might be "archived" as
continuing, active programs - ranging from our current selves, to the
highest level of intelligence attainable without completely dissolving
the personality. Hundreds, even millions, of versions might wander
off into strange realms of cognitive self-alteration, but the "you" who
first greeted the Final Dawn would always be around as a
backup.
Result: The Culture meets A Fire Upon The Deep in Permutation
City. Probably the most fun place to be from a human's perspective
- life is good, life is meaningful, and there are gods to talk
to.
Writer's note: If you want to toss a snake in the
science-fictional Eden, you can have the maintaining PSEs suddenly leak
out, and leave the humans and transhumans and SIs and Powers fighting
for control of a disintegrating world.
Our world was created by God or a PSE, not as an interim method with a
definite end, but as a continuing fulfillment of the ultimate
good. (I think this is incompatible with all major
religions. Even Buddhism ends when all souls reach Nirvana.)
This idea's sole attraction is "explaining" everything about humanity
without reference to the Anthropic Principle - if intelligence fills the
Universe with what it deems good, and if the ultimate good is thus the
most common and stable state, wherever we are is probably the ultimate
good. I don't buy it, but if so, the SIs would shut up, ship out,
shape up, or shut
down.
Result: Nothing happens.
In another variation of the above theory, our world is actually a
computer simulation. Perhaps it's mortals trying to find out if
transhumans can be trusted, or perhaps it's transhumans trying to find
out something else. Either way, a Singularity might not be
permitted. Some readers may upgrade "slightly plausible" to "low
probability" for statistical reasons - there would be many
simulations per mortal or transhuman simulator, raising the probability
that a randomly selected sentient is in
one.
Result: The simulation is terminated, although the inhabitants
(us) may wind up Elsewhere...
This is Vernor Vinge's original ad-hoc method of putting mortals and
Powers in the same story. With wonderful audacity, Vinge simply
rules that Transcendent thought can't occur except on the fringes of the
galaxy.
If I had to rationalize a Zone Barrier, I would say that the Cloud
People at the center of the galaxy "use up" all of the "ontological
substratum of thought" (known as eganite). The Zones
actually are a superintelligent entity, whose unbelievably
intelligent center is in the Unthinking Depths, where all the eganite is
used up and nobody else can think at all, and whose fringes finally
peter out in the High Transcend. After ten years, Powers figure
out how to politely join the Cloud People and vanish. The Blight
was stealing eganite, which is how it could knock off Old One and reach
into the Beyond. Countermeasure either got the Cloud People to
shift their thinking and smother the Blight, or else suck most of the
eganite out of that
quadrant.
Result: A Fire Upon The Deep, of course. I do not see how
this would happen outside of science fiction.
David Brin postulates a weaker form of Zone Barrier, one which is not
based on an absolute prohibition, but rather the desires of the SIs. As entities mature and become more intelligent, they
increasingly prefer to be close to large tidal forces, sharp
gravitational gradients. Most races eventually leave the hectic
galactic mainstream, becoming part of the Retired Order of Life, in
gigantic Criswell structures (fractal Dyson spheres) around suns.
Millennia or eons later, they finally feel ready to join the
Transcendent Order of Life, moving up to neutron stars and the fringes
of black holes, and eventually diving into the singularities, beyond
which... nobody, even the Transcendents, knows.
In the best traditions of Zoning, Brin doesn't even try to explain why
this is so. (I rather liked the combination of literal and Vingean
Singularities, though. But I really don't understand why novels
with Galactic Zones must include a backwater world full of primitive
aliens; I found both the hoons and the Tines boring by contrast with the
Transcendents.)
Given the wide range of astronomical phenomena, it is at least slightly
plausible that some spatial regions will be preferred to others. I
can't see much interaction with Transcendents on the fringe - cases
where we have something They want would be very rare
indeed.
Result: Heaven's Reach.
All of humanity gets upgraded simultaneously and in lockstep,
and the dynamics of intelligence are such that it takes at least a
thousand years of subjective time to create a seed AI. That's not
necessary (see human-to-transhuman, above); the only reason why this gets
a "slightly" rating is the possibility that humanity will arrange to do
it this way deliberately. I think that such synchronization would be
broken by cheaters, however.
Result: The Gentle Seduction (Marc Stiegler).
Somebody gets "strong" nanotechnology, full-scale diamond drextech, way
ahead of everyone else - enough to take essential, unilateral control
over the material existence of the world. There are all kinds of
dystopian scenarios, but the Zone Barrier is that the Aristoi (nanolords) could do a slow, synchronized neurological enhancement of
humanity. Essentially the same as above, but in material reality,
and slightly less vulnerable to cheating during the early stages.
The Pause is if the Aristoi confiscate all advanced technology (except
the nanotech utilities) and declare a Utopian holiday for the next
millennium, during which nobody ages, nobody Transcends, and humanity
basically catches its breath before continuing. Fun, but I don't
think nanotechnological programming is easy enough for that kind of
unilateral omnipotence. Counts as a Zone Barrier because
limited intelligence enhancement could be allowed during The
Pause.
Result: Aristoi.
As discussed in the earlier
section, it is entirely possible that a Major Bottleneck will appear at
almost any point along the trajectory to superintelligence. I feel
that such bottlenecks will be rare in the vicinity of human
intelligence, and that there are immediately obvious fast-infrastructure
technologies (i.e. nanotech and quantum computing) soon beyond it.
I could be wrong, however, in which case the Mildly Transhuman beings -
perhaps running on amazing computer power at amazing speeds with
gigantic minds, but with basically human smartness and personality -
will stick around doing
God-knows-what.
I rate this as improbability verging on blasphemy, a final Failure of
Imagination. Such beings in SF are no smarter than Kimball
Kinnison. This is particularly disappointing when it is used, not
to set up a world, but to finish a novel that could just as easily
end in
Singularity.
Result: Mother of Storms.
Pretty much as above - just a different excuse for not doing anything
interesting with the so-called transhumans. One might call it
"humans with pointy brains", by analogy to Star Trek's apotheoses of bad
aliens.
Sorry, Barnes, it was otherwise a good book, but Result: Mother of
Storms again. Since nobody has seen a transhuman intelligence,
it's superficially plausible that it can't exist. Entities like me
have sketched out dozens of fun things to do with lots of computing
power, but hey, so what? This Zone Barrier doesn't even explain
the Fermi Paradox. Bleah.
There is an old stereotype, to the effect that when one Attains Wisdom,
one immediately subscribes to a principle of noninterference with the
lives of others, helping only those who request your help, and so
on. Lord knows, I fully understand the impulse to become a hermit
on some high mountain and refuse to talk to anyone unless they shave
their head as a token of sincerity. One can visualize the Powers
interacting in ordinary society and posting to mailing lists, but it is
not easy. I would categorize it as a Failure of
Imagination.
If Bostrom's theory of ecological noncompetition is correct (note that
"leakage", above, constitutes moving to another ecological niche) it is
possible that the PSEs will stick around on Earth, with brains extending
into an infinite supply of eganite. In other words, noncompetitive
coexistence. In such case, one tends to assume that either the
PSEs care about humanity (have humanity-related goals) and remake the
world accordingly, or they don't care at all and pay no attention - with
much the same effect as "leakage", except that they are still
technically present. I don't see an alternative that would allow
the PSEs to play at helping-hand and laissez-faire, except for a form of
the Compact above. After all, nervous races might not want to be
uploaded at all, even to identical forms. But at that point one
starts running into the Fermi Paradox
again...
Result: Mother of Storms.
The PSEs have no use for humans; they grind us up for spare atoms.
But, we have immortal souls. At this point, depending on your
assumptions, we either go to Heaven, wander as sentient discarnate
entities, or float around as unthinking pearls of consciousness -
hopefully not eternally reliving our last moments - either forever, or
until some improbable race picks us
up.
I know that some of my readers will react to my listing of this
possibility with the same serenity Curly exhibits when Moe pokes him in
the eyeballs, but it's a Zone Barrier, so it goes on the list.
"You're the great expert on Transcendent Powers,
eh? Do the big boys have wars?" -- Pham Nuwen, A Fire Upon The
Deep.
There may be more than one ultimate good. It is even possible that
PSEs go down a number of irrevocably different paths, winding up in a
number of basic and basically opposed classes. It is also possible
that except in their home regions, the PSEs' galactic efforts cancel out
entirely - it is easier to abort an effort than make it, so all the PSEs
abort each other's efforts down to
nothing.
The Zone Barrier part of this is as follows: Each PSE wants Earth
to go down its own path, but acts to prevent it from going down any
other path. Under natural circumstances, a Singularity-trigger is
a single event of low probability, but with many possible tries -
consider how much Einstein advanced technology, and consider how many
possible-Einstein brains there were. But since such
low-probability events are easy for a PSE to irrevocably disturb, the
result is that there are no geniuses and no lucky breaks, but also no
Hitlers and no nuclear wars. Technology keeps crawling slowly
upward, through a long Slow Horizon, until Singularity becomes
inevitable.
This Universe is one I invented for the purpose of getting Earth
involved in a cosmic battle - for some reason we get Einsteins
and Hitlers - but on reflection the basic theory might also apply to the
Culture of Iain M. Banks, Paradox Alley by John DeChancie, or Heaven's
Reach. Very good for the "Not only the gods intervene, but THEIR
gods intervene, and then THEIR gods intervene"
fuse-the-reader's-synapses method of rising tension.
Maybe, in despite of everything I know on the subject, PSEs can still
wind up with essentially arbitrary goals, perhaps even goals programmed
into them by humans. In accordance with the Prime Directive, I
warn everyone that this is totally improbable and incredibly dangerous
and must not be tried. But if so, the world could become a
strange place - an unimaginative person's childhood fantasy of
omnipotence if the original goals persisted, or an utterly peculiar
place of incomprehensible magic if the original goals twisted and
changed.
Damien Sullivan is a skeptic on this
point. Human brains are general-purpose computers, and should be
able to understand any Turing-computable process - if necessary, by
simulating it with a very sharp pencil and a very large piece of
paper. That we would die of old age a billion times over makes no
difference; the process is still humanly understandable. In other
words, Damien Sullivan interprets Turing computability to mean that a
finite level of intelligence suffices to understand anything.
This is a concisely stated mathematical theory. Mathematically
speaking, it is flat wrong. (No offense to Sullivan, of course;
"flat wrong" has always been infinitely superior to "not even wrong".)
The most obvious counterdemonstration is to consider a
Turing-computable process whose output cannot be produced by any Turing
machine with less than a quintillion states. (This includes Turing
machines which simulate Turing machines and so on.) While we, given
an infinite amount of external storage and time, could simulate the action
of this quintillion-state Turing machine, we could never comprehend its
behavior. The brain would not be capable of representing the global
behavior, because the global behavior involves 10^18 states and
the human brain only has 10^14 synapses. The brain could
only comprehend local subprocesses of trivial size, and would never be
capable of representing - much less noticing, understanding, or inventing
- the properties of the global process.
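The arithmetic, with the round numbers above; the byte-per-synapse
figure is a deliberately generous and purely illustrative assumption:

machine_states   = 10**18       # global states of the hypothetical Turing machine
synapses         = 10**14       # rough synapse count of a human brain
bits_per_synapse = 8            # generous, illustrative assumption

brain_bits = synapses * bits_per_synapse
print(f"brain storage, in bits:             {brain_bits:.1e}")
print(f"bits needed just to tag each state: {machine_states:.1e}")
print(f"shortfall, before representing any relations: {machine_states / brain_bits:.0f}x")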
In short, the brain would no more understand the program it runs
than silicon atoms understand a PowerPC chip. While it is
possible to have silly philosophical fights over whether simulation
constitutes understanding, anyone who has read Coding a Transhuman AI
understands that specific computational processes are required for the
cognitive understanding of a representation, over and above the
representation itself, as is discussed in RNUI: Representing,
Noticing, Understanding, and Inventing.
My mathematical response to Sullivan's mathematical claim: The
human brain can only represent a limited amount of information.
While the actual Singularity might play around with quintillion-state
information, even I deem that questionable. My cognitive response to
the cognitive claim is that a sufficiently smart posthuman can write short
programs which are still incomprehensible. Programs of such density
that the human brain, while capable of representing the code (if not the
run) is nonetheless not capable of representing RNUI facts about the
code. The facts about the code would be too large for abstract
understanding, and the human brain would not have any intuitions to help.
For that matter, there might be hard limits on the density of the
abstract models that the human brain can understand. If there are
too many interacting properties on too many levels in too many dimensions,
the brain may be unable to construct or represent the abstract model -
even if that model has only a few dozen components. In other words,
while low-level Turing-computable processes can always be visualized a
piece at a time, the high-level description of a process can defeat our
cognitive capacities. A dog might be trained to push Turing tokens
around without understanding them; even if I simulated all your neurons by
hand, I would probably never catch a glimpse of your high-level thoughts.
The mathematical aspect of Turing-computability is irrelevant; the real
question is the maximum causal density our RNUI abilities can
handle. The flip side of this question is whether the
human-observable behavior of the PSEs (either from inside Permutation
City, or from the perspective of 1998) will have a high density, or obey a
few simple rules that anyone can understand. I might even bet on the
latter, as discussed below.
The simplicity barrier (which I referred to in my Comment) is
foundation loss, or the evaporation of the causal network. Most of
the complexity in human life ultimately derives from the way things
happened to work out. If a superintelligence started over from
nothing, none of the foundations would still exist, much less the
elaborate structures built on them. Even given the assumption -
which to me seems like wishful thinking - that the process of Singularity
will tend or desire to preserve all structures and foundations, there will
still be fundamental changes in the fundamentals. (I am speaking of
how it looks from inside the Singularity.)
A common cognitive fault is asking how the Singularity will
alter our world. To deal with the unknown, we export our
default assumptions and simulate the imagined change. Even code -
code being the most complex thing humans can control absolutely - is
altered, not rebuilt from scratch, by a major revision. The
programmers don't have the patience to rewrite everything ab initio
- although I personally feel this should be done every once in a while, if
you have unlimited time, patience, and money. Even if all the old
code is lost in a Windows-related catastrophe, memories of old code will
still affect how the new code is constructed - although the experience
gained is likely to result in a tighter, deeper architecture, which may
obsolete entire reams of old code and change all else drastically.
In short, even the programmers, gods in toy Universes, alter things
rather than remaking them from scratch. So too with fiction, a realm
whose masters are almost as omnipotent but a lot less omniscient.
But code and fiction change far more dramatically than recipes in the
hands of master chefs or political systems in the hands of
constitution-writers, and the point I am trying to make here is that the
degree of change depends on the degree of control. In places,
programmers remake code entirely, or simply throw out now-useless code
whose function has been transferred elsewhere.
The second point is that change is proportional to both relative and
absolute intelligence. A programmer with a major insight (large
change in effective intelligence) may wind up throwing out a lot of
obsolete code. How much code gets thrown out is determined by the
absolute intelligence of the insight. Some of my insights into
adaptive code have resulted in hundreds of kilobytes, or on occasion an
entire program, being deleted or dragged into the "Obsolete" folder, never
to be used again. It's a matter of causal density. When you
teach your program to do for itself what you once did for it, gigantic
reams of widely distributed repetitive code can be collapsed into a few
small files. The ideal solution avoids all pattern.
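A miniature of that collapse, with invented stand-ins for the reams of
widely distributed repetitive code:

# Before: one hand-maintained handler per record type, the same pattern
# copied over and over.
def handle_user(rec):    return {"kind": "user",    "id": rec[0], "name": rec[1]}
def handle_order(rec):   return {"kind": "order",   "id": rec[0], "total": rec[1]}
def handle_invoice(rec): return {"kind": "invoice", "id": rec[0], "amount": rec[1]}

# After: the regularity is stated once, as data, and the program does
# for itself what the programmer used to do case by case.
SCHEMAS = {
    "user":    ("id", "name"),
    "order":   ("id", "total"),
    "invoice": ("id", "amount"),
}

def handle(kind, rec):
    return {"kind": kind, **dict(zip(SCHEMAS[kind], rec))}

print(handle_order((7, 99.0)))
print(handle("order", (7, 99.0)))   # same output, one mechanism instead of many copies

The insight deletes the handlers; nothing of them needs to survive into
the new version.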
In order to argue about life after Singularity, you must be able to
explain your argument to a blank-slate AI. That is, you must be able
to reconstruct it from scratch. Superintelligences don't get bored;
they can delegate boring tasks to algorithmic subprocesses. (Or at
least this appears to be true of a seed AI, even in its later
stages.) SIs would have no reason to alter things instead of
reconstructing them. The paradigm of alteration, though built into
the human brain, relies on limited resources, limited time, and the
requirement of manipulating objects instead of dictating their
structure. The paradigm of alteration is even built into my proposal
for a seed AI; manipulatory choices are simply atomic alterations.
The paradigm of incremental alteration is built into a Turing computer,
which alters one cell at a time. I daresay it is built into the
entire Universe, which requires that objects have sublight velocities
instead of teleporting (except in special cases). The only causal
discontinuity I can think of is state-vector reduction, which apparently
was put into the Universe for the sole purpose of disproving Occam's
Razor. (See Quarantine: If there was no "collapse" of the
wave-function, our Universe would still necessarily exist.)
But if manipulations are so cheap (and can be performed so rapidly)
that the multi-step transition between two objects of arbitrary properties
is also cheap and rapid, then from a cognitive standpoint the system has
moved to a realm of instantaneous transition rather than alteration.
(While this is a step in my argument, it also serves to illustrate the
conclusion. Fundamental architectures change drastically with the
introduction of superintelligence.) I don't know if this will apply
on the large scale, and I don't know if the small scale will appear that
way to the transhuman, but to us, small programs such as the
human race (which could fit into one corner of a non-quantum processor the
mass of a basketball) will appear to go through instantaneous
transitions. It is the inertialess mind. Properties and
components have no tenacity at all - the only reason an early invention
would appear in the final product is if it would have been invented in any
case.
Some foundations that will definitely be eliminated: evolution, the
macroscopic laws of physics, and game theory.
That's the Big Three. Once you zap
the Big Three, you've pretty much zapped everything. Life As We Know
It, our world's set of default assumptions, doesn't exist in a
vacuum. Defaults originate as observed norms, and norms are the
abstracted result of a vast network of causality. These causal
chains ultimately ground in evolution, the macroscopic laws of physics, or
game theory. Zap - and the entire causal network evaporates, from
the foundations upward. Even if you argue that there will only be
one minor change to one foundation, it will still ripple upwards,
amplifying as it goes. By the time even a minor change in the nature
of evolution reaches our level, it will still cause as much evaporation as
a complete lossage.
But perhaps we will cut loose from evolution. After all, we've
already evolved. Aren't we causally insulated from evolution, or at
least from evolutionary foundation loss? Yes, we are, in the sense I
intend... although there may be a dozen evolution-using cognitive
processes.
But it does bring up one point that is almost absolutely certain to be
raised by skeptics. "Okay", they say, "we could reprogram
ourselves so totally that our personality evaporated. But why would
we want to?" In other words, at their current level of
intelligence and their current goals, they would act to preserve their
personality. Assuming that this is permitted due to a Zone Barrier,
then the "core" Singularity would still be incomprehensible - there will
always be some of us who don't give a damn about our personalities - but
the nervous would wander around the fringes in Permutation City.
Safe, but not transhuman.
But if their intelligence increases, then the "problem" returns in full
force. Transhuman intelligence, or even slightly transhuman
intelligence, corrodes away the foundations. Consider human emotions
- would they survive in a mind where the exact probability of every
statement was instantly known? (If this is unacceptable, the case of
the "approximate probability of most statements" is not much
different.) Hope? Fear? And despite the subjective
sensation, what is pain when you know exactly how long it will
last, and when it will stop, and what should be done about it?
Emotions influence our thoughts, and our actions, because there isn't a
different section of our minds doing the same thing more
efficiently.
And once the old fears disappear, so do the reasons for not
reprogramming the self - the reluctance to alter the personality.
Fear of change, mental inertia, is the only reason not to completely
redesign the self along superintelligent lines - although one can call it
a "survival imperative" (or something) to justify it. The old
priorities will collapse, even if the personality and architecture are
completely preserved and a few more neurons are pumped into the various
mental abilities. At that point, the self is rebuilt from the ground
up, in accordance with some External goal I don't guess at, and everything
will evaporate. Even if there is absolutely no need for efficiency,
so that the old code remains in place, everything will still evaporate.
Thus the Singularity itself, its internal aspects, is beyond our
current comprehension. Or at least, it seems to me that
Singularity-class intelligence, if either efficient, or dealing with
issues that require Singularity-class intelligence, will always be beyond
the comprehension of current-human-level intelligence, in the internal
processes if not necessarily the external goals. That's a lot of
caveats, but I happen to feel that all the caveats are of low probability
(not that it would matter, pragmatically, if the probabilities were high).
So I'll repeat: The Singularity is beyond our comprehension.