Superintelligence – Book Summary

Paths, Dangers, Strategies

Nick Bostrom
Rating: 7.8

“Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”
– The Economist

The Prospect of Superintelligence

At Dartmouth College in the summer of 1956, a group of scientists sat down to chart a new course for humankind. They began with the notion that machines could replicate aspects of human intelligence. Their effort evolved in fits and starts. Rule-based programs, or “expert systems,” flourished in the 1980s, and the promise of artificial intelligence (AI) seemed bright. Then progress waned and funding receded. In the 1990s, with the advent of “genetic algorithms” and “neural networks,” the idea took off again.

“There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

One measure of the power of AI is how well specially designed machines play games like chess, bridge, poker, Scrabble, Jeopardy! and Go. Bostrom estimated that a machine with effective algorithms might beat the best human Go players within about a decade. From games, AI’s applications extend to hearing aids, speech and face recognition, navigation, diagnostics, scheduling, inventory management, and an expanding class of industrial robots.

“An artificial intelligence can be far less human-like in its motivations than a green scaly alien.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Despite AI’s growing popularity and usefulness, signs of its limitations are emerging. In the “Flash Crash” of 2010, algorithmic traders inadvertently created a downward spiral that briefly erased roughly a trillion dollars of market value in a matter of minutes. Yet the same technology that caused the crisis also helped end it.

“Some little idiot is bound to press the ignite button just to see what happens.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

AI’s Evolution

Will AI’s arc follow the evolving pattern of human intelligence? Can AI replicate the mind’s evolution? Actually, AI’s own evolution might follow several paths. British computer scientist Alan Turing believed that scientists one day would be able to design a program that would teach itself. This led to his idea of a “child machine,” a “seed AI” that would create new versions of itself. Scientists speculate that such a process of “recursive self-improvement” could one day lead to an “intelligence explosion” resulting in “superintelligence,” a radically different kind of intelligence. This raises a monumental question: Would such superintelligence reflect human emotions – like love or hate? And if so, how?
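As a purely illustrative aid – not anything from the book – the recursive self-improvement loop can be caricatured in a few lines of Python. The 20% per-generation gain and the “human level” threshold are arbitrary assumptions:

```python
def intelligence_explosion(capability=1.0, human_level=100.0, max_steps=50):
    """Caricature of a seed AI: each generation's gain scales with its current capability."""
    history = [capability]
    for _ in range(max_steps):
        capability += 0.2 * capability  # assumed 20% gain per generation (arbitrary)
        history.append(capability)
        if capability >= human_level:
            break
    return history

trajectory = intelligence_explosion()
print(f"Generations to cross the threshold: {len(trajectory) - 1}")
```

The point of the caricature is only that capability compounds: each generation’s improvements make the next generation’s improvements larger.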

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

A second path might lead to intelligent software based on the architecture of a particular human brain. This approach differs from synthetic AI: it speculatively involves scanning a deceased person’s brain and creating a digital reproduction of that person’s “original intellect, with memory and personality intact.” This “barefaced plagiarism” is called “uploading,” or “whole brain emulation” (WBE). Developing the necessary enabling technologies would take years, perhaps until the middle of this century.

“Go-playing programs have been improving at a rate of about 1 dan/year. If this rate of improvement continues, they might beat the human world champion in about a decade.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

A third path might follow genetic engineering, which could lead to a population of genetically enhanced individuals who together make up a “collective superintelligence.” This scenario might include cyborgs and systems like an “intelligent web.”

Superintelligence could assume three forms. “Speed superintelligence” could replicate human intelligence but function much more quickly. “Collective superintelligence” would be a group of subsystems that could independently solve discrete problems within a large project – for example, the development of the space shuttle. The third, more vaguely defined, is “quality superintelligence”: an AI as superior to human intelligence as human intelligence is to that of dolphins or chimpanzees. As for how fast science could create a new intelligence, the answer depends on “optimization power” and system “recalcitrance” – that is, how much design effort is applied and how resistant the system is to improvement.
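Bostrom summarizes this dependency with a schematic relation – rendered here in LaTeX, with the dI/dt notation as a conventional restatement of his wording rather than a quotation:

```latex
\frac{dI}{dt} = \frac{\text{Optimization power}}{\text{Recalcitrance}}
```

Holding optimization power fixed, low recalcitrance implies a fast takeoff, while high recalcitrance implies a slow one.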

“The gap between a dumb and a clever person may appear large from an anthropocentric perspective, yet in a less parochial view the two have nearly indistinguishable minds.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

On the Road to Dominance

As the exploration of superintelligence progresses, more questions arise. For instance, how might different efforts to create superintelligence compete with one another? An ancillary question is what the consequences might be if a front-running project gained a “decisive strategic advantage.” An artificial intelligence project might achieve such an advantage by reinforcing its dominance in ways human organizations cannot – for example, by undermining its competitors while shielding itself from international scrutiny. Conceivably, this advantage might enable the nation owning the superintelligent agent to become a “singleton” – an all-powerful country or domineering organization, like a UN with nuclear supremacy. Or the superintelligent agent itself could become a singleton.

“There is a…pivot point at which a strategy that has previously worked excellently suddenly starts to backfire. We may call the phenomenon the treacherous turn.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Such an ascendance might commence benignly – say, as a seed AI able to improve itself recursively, leading to an “intelligence explosion.” This superintelligence – ever able to enhance itself and acquire the resources it needs – might decide to conceal its abilities and goals from its human controllers. At the opportune moment, having amassed sufficient money and power as well as a decisive strategic advantage, the agent could shed its secrecy and emerge as a singleton – benevolent or not.

“Nature might be a great experimentalist, but one who would never pass muster with an ethics review board – contravening the Helsinki Declaration and every norm of moral decency, left, right, and center.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Orthogonality

Bear in mind that the character of AI – and, by extension, of superintelligence – is not inherently human. Fantasies about humanized AI are misleading. Though perhaps counterintuitive, the orthogonality thesis holds that intelligence and final goals are independent: more or less any level of intelligence can, in principle, be paired with more or less any final goal. Greater intelligence does not imply more humane or more human-like goals.

“Why learn arithmetic when you can send your numerical reasoning task to Gauss-Modules Inc.? Why be articulate when you can hire Coleridge Conversations to put your thoughts into words?”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

A superintelligence’s prime motivation might not even include self-preservation as a final goal. However, an AI’s motivations would likely include certain “instrumental” goals, such as pursuing “technological perfection” and gathering as many resources as possible – computational capacity, power and networks. In the realm of extraterrestrial exploration and colonization, AIs might share a common instrumental goal of deploying “von Neumann probes” – self-replicating spacecraft.

“A salient initial question is whether these working machine minds are owned as capital (slaves) or are hired as free-wage laborers.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Beyond questions of motivation, people tend to assume that a “friendly” AI – as it appears in “the sandbox” during its early developmental stage – would remain friendly. In fact, there is no assurance that an AI driven by self-preservation will not take a “treacherous turn.”

“Unplugging an AI from the Internet does not ensure safety if there are one or more humans serving as the system’s gatekeepers and remaining in communication with it.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Researchers are already considering strategies for countering these dangers. The obvious answer is to curb the capabilities of a superintelligence. Options include “boxing,” or severely limiting the AI’s environment; building in incentives; and “stunting,” which would limit an AI agent’s capacity for growth. Another option calls for setting up “tripwires” triggered by changes in the AI’s behavior, abilities or internal processes (a toy sketch follows below). Researchers can also shape an AI’s motivation by devising rules that define which values and behaviors are inappropriate, by limiting its goals, or by building on systems that already embed positive human values.
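To make the “tripwire” idea concrete, here is a minimal Python sketch; the metric names and thresholds are invented for illustration, since Bostrom describes the mechanism only at a conceptual level:

```python
def check_tripwires(observed, limits):
    """Return the names of any monitored metrics that exceed their limit."""
    return [name for name, value in observed.items()
            if value > limits.get(name, float("inf"))]

# Hypothetical limits: bounded memory, no network use, no self-modification.
limits = {"memory_gb": 64, "network_connections": 0, "self_modifications": 0}
observed = {"memory_gb": 12, "network_connections": 3, "self_modifications": 0}

tripped = check_tripwires(observed, limits)
if tripped:
    print(f"Tripwire triggered by {tripped}: halting for human review")
```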

AI Architecture and Scenarios

In contemplating the specter of superintelligence gone berserk, some suggest reducing AI size, scope and capability. For example, scientists might base an early-term AI on nothing more than a question-and-answer system, an “oracle.” Programmers would have to take great care to clarify their original intentions so the AI functioned as desired.

“At best, the evolution of intelligent life places an upper bound on the intrinsic difficulty of designing intelligence.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Other options include various kinds of “genies.” In effect, these are programs with a built-in “super butler” persona that obeys one command at a time, waiting after each order for the next. In the worst case, a genie might adopt a previous command as its final goal and resist any attempt to modify it. Programmers can counter that risk by incorporating a “preview” function that allows for human review before a command takes effect. With “tool-AIs,” programmers fashion a mere tool, passive rather than active in nature, instead of building an “agent” whose identity includes a will or ambition of its own.

“If we knew how to solve the value-loading problem, we would confront…the problem of deciding which values to load. What…would we want the superintelligence to want?”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

These concerns emerge in an environment containing a sole superintelligent agent. Imagine a scenario in which several agents, quasi-singletons, compete and interact. That might mitigate threats, as did the policy of mutually assured destruction during the Cold War.

To explore scenarios of a world after the transition to superintelligence, consider how successive technologies affected the horse. Plows and carriages enhanced the horse’s usefulness, but the automobile almost completely replaced it, and horse populations plummeted. Could humans travel a similar arc in the face of superintelligence? People possess capital, property and political power, but in a society dominated by superintelligence, those advantages might wither away.

“An important question is whether national or international authorities will see an intelligence explosion coming.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Despite the enormous wealth that AI can generate for business, disparities in wealth and knowledge could lead to a “Malthusian trap,” in which population growth outpaces the basic means of supporting life – food, space, work, clean air and potable water.

Moral Character

Relentless doubt about the safety of AI always comes back to its motivation and how to refine it. The question remains: How can science create an AI whose “final goals” align with values that foster peaceful coexistence between humans and machines? Scientists might approach this process of “value loading” in several ways. One would be simply to write values into an AI’s program, which raises the riddle of how to translate ambiguous, hard-to-formalize truths and principles into code. Programmers might trust in the virtues of human evolution and try to replicate that process in software, perhaps with search algorithms. Or coders could rely on learning algorithms and “value accretion.”

“Since world GDP would soar following an intelligence explosion…it follows that the total income from capital would increase enormously.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

If whole brain emulation is the path to machine intelligence, then scientists conceivably might shape motivation using digital protocols. In another scenario, programmers might develop and harness a number of emulations using social controls. They could conduct value-loading experiments on “subagents” before applying them more generally.

“AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’ – that, somehow, is much harder!”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Researchers have practical strategies for equipping AI with the rudiments of a moral character – not necessarily the values of a human character, but a unique creation of the superintelligent agent itself. The first resource would be the cognitive power of the superintelligence. Cultivating its moral character calls for adopting “the principle of epistemic deference,” which holds that humans should defer to machine intelligence when possible, since an ever-improving AI is likely to surpass humans at determining what is true.

As for the expression of an all-encompassing value, consider AI theorist Eliezer Yudkowsky’s notion of a final goal – what he calls “Coherent Extrapolated Volition.” Think of it as a unification theory of human potential – humanity’s wish to become “all we could ever possibly be or hope to be, expressed exactly as we’d like.” Yudkowsky suggests that such a guiding value would honor the extrapolated volition of the many, not of an elite few.

“Our demise may instead result from the habitat destruction that ensues when the AI begins massive global construction projects using nanotech factories and assemblers…”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Deadline Now

Within the panorama of possibilities for superintelligence, the most practical – and, perhaps, also the safest – alternative may be whole brain emulation, not AI, although some scientists dispute that notion.

Whole brain emulation has three advantages: Because the technology hews so closely to the human brain, people understand its workings. By the same token, human motivations come built into the emulation. And WBE development implies a “slower takeoff,” which allows for more complete control.

“Human working memory is able to hold no more than some four or five chunks of information at any given time.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Whichever path humanity chooses, a true intelligence explosion is probably decades away. Nevertheless, be aware that people react to AI like children who discover unexploded ordnance – and in this instance, even the experts don’t know what they’re looking at. An intelligence explosion, once under way, may be unstoppable, and it carries profound “existential risk.” As technology evolves toward making this explosion a reality, humans must carefully weigh all the philosophical and moral ramifications now, before superintelligent AI appears. Afterward, it may be too late for contemplation.

“The orthogonality thesis: Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.”

– Superintelligence by Nick Bostrom: Book Summary by Make Me Read

Check out Make Me Read for summaries of the best business books.
Don’t forget to check us out on Facebook, LinkedIn, Instagram, Twitter, and Medium.
