Nick Bostrom has certainly written the most comprehensive and profound book on the possible developments of AI and our future with it that I have read so far. Bostrom, a professor at Oxford University, where he is the founding director of the Future of Humanity Institute, systematically lays out the arguments for why we should expect an intelligence explosion, why AI will have significant advantages over humans, what the possible dangers are and how we may be able to deal with them, and what our strategies should be. In a sense the book is complete, because it seems to consider every argument, every potential complication, and the probabilities of all conceivable scenarios that one can think of today. For instance, Bostrom discusses at length how Whole Brain Emulation (WBE) may emerge, how it would compare to true AI, which of the two is likely to appear first, and whether it would be better for humanity if WBE or true AI appeared first. He seems to think we should not promote WBE because of technology coupling: research on WBE may lead to the development of neuromorphic AI, that is, AI based on neuromorphic principles found in the human brain, but applied and integrated in ways that we understand poorly. Because neuromorphic AI would most likely be less well understood and less well controlled, it is potentially more harmful than true AI. Thus, his policy recommendation is not to push for WBE.
This is but one example of the book’s thorough weighing of arguments, the probability of scenarios, and the strategies humanity should pursue as a consequence. Sometimes the book uses mathematical models and detailed scientific arguments drawn from neuroscience, computer science, or philosophy. For instance, Bostrom models the intelligence explosion as a differential equation, relating the rate of increase of intelligence to the optimization power applied and the recalcitrance (the resistance and obstacles to further intelligence improvements). Since optimization power and recalcitrance both depend on the level of intelligence available at a given time, a differential equation can capture the ensuing dynamics and illustrate the explosive nature of the development.
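The flavor of this model can be seen in a minimal numerical sketch. This is not Bostrom's exact formulation, only an illustration under two hypothetical simplifications: optimization power grows in proportion to the system's current intelligence (a self-improving system), and recalcitrance stays constant.

```python
# Illustrative sketch of dI/dt = optimization_power / recalcitrance.
# Assumptions (not from the book's exact equations): optimization power
# is proportional to current intelligence I, recalcitrance is constant.

def simulate(steps=100, dt=0.1, I0=1.0, recalcitrance=1.0):
    """Euler integration of dI/dt = optimization_power(I) / recalcitrance."""
    I = I0
    trajectory = [I]
    for _ in range(steps):
        optimization_power = I  # assumption: self-improvement scales with I
        I += dt * optimization_power / recalcitrance
        trajectory.append(I)
    return trajectory

traj = simulate()
# Under these assumptions growth is exponential: intelligence grows by a
# constant factor per unit of simulated time, i.e. it "explodes".
```

If recalcitrance instead falls as intelligence rises, which Bostrom considers plausible in some phases, the growth becomes faster than exponential, which is the core of the intelligence-explosion argument.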
I cannot possibly give a fair account of the wealth of arguments and the richness of the material covered in the book, but for a good summary I can recommend the TED talk on AI risks given by Nick Bostrom in 2015 and a well-written profile of Bostrom in the New Yorker from 2015.
Although the book is very comprehensive, I find it curious that the term “intelligence” is never really defined, which seems to lead to a rather limited view of what intelligence is or could be. The book devotes a full chapter to the question of what a superintelligence might be. But Bostrom seems to take human intelligence as a given baseline without defining what exactly it is, and then describes how a superintelligence may differ from human intelligence by distinguishing three types:
> Speed superintelligence: A system that can do all a human intellect can do but much faster.
>
> ...
>
> Collective superintelligence: A system composed of a large number of smaller intellects such that the system's overall performance across many very general domains vastly outstrips that of any current cognitive system.
>
> ...
>
> Quality superintelligence: A system that is at least as fast as a human mind and vastly qualitatively smarter.
Thus, Bostrom uses human performance as a yardstick and does not define or describe intelligence in other terms, such that someone who has never met humans would also understand what is meant. Perhaps Bostrom, and others writing about the topic, see no need to define human intelligence, because it is obviously the basis of all we can do and have accomplished as a species. So without knowing exactly how it works, we define it as the basic mechanism that facilitates all the wonderful achievements of humanity, from developing the scientific method to organizing ourselves in complex societies. This reference to the human example as a baseline allows Bostrom to discuss superintelligence without knowing exactly what intelligence is.
But this vagueness leads to implicit assumptions that limit the discourse. In talks about AI and superintelligence, inevitably the brain of a homo sapiens is shown, and people like Einstein, Newton, DeWitt or von Neumann are mentioned as examples of very intelligent humans. This seems to suggest that the human brain is the location and source of human intelligence. However, if we ask who exactly conceived of the scientific method, or who exactly designed and developed intercontinental missiles, or who exactly designed and implemented a complex society like the Roman Republic or the European Union, we cannot really point to any individual. Many individuals have contributed over the course of many generations, but no single human brain has conceived of a master plan and implemented it. So if neither Newton nor Einstein nor DeWitt came up with the scientific method, who did? These and many other accomplishments that we as humans are rightly proud of are obviously beyond the means of any individual human; they are the product of extended cooperation among many individuals. Why did this cooperation happen?
It seems that humans are endowed with a set of reaction and activity patterns that facilitate this cooperation. Consider your reaction if someone addresses you on the street: “Hi, what is the time?” For any healthy person it is impossible not to react. You can consciously decide to ignore the question, but only after you have registered the other person, interpreted his or her intention in one way or the other, considered possible options, and then made a decision. In contrast, we do not deal with many other sensory inputs: images of trees and mountains and cars and birds, sounds from the wind, airplanes, construction machines, etc. All those inputs normally do not trigger a reaction, but if another person directly addresses us, an innate mechanism is triggered and we have to deal with the request. This kind of mechanism is not unlike the repertoire of behavior that helps us find food. If we have walked through the mountains for two days without eating anything and we see something edible beside the path, we have no choice but to react to it. Again, if we consider it poisonous, we may decide not to eat it, but only after an elaborate assessment and decision process. Similarly, if after two days of walking without seeing a single person we meet someone who addresses us, we have no choice but to react.
Similarly, deeply rooted mechanisms in our brains assess other persons that we meet: their capabilities, their intentions, their position in the social structure, their appreciation of oneself, etc. These mechanisms are triggered and executed during each interaction with other people, and they form the basis of how we interact with each other. Those and many other innate mechanisms, which sometimes manifest themselves as pattern-assessment or pattern-reaction processes, are such that at the group and society level larger patterns of cooperation emerge. These low-level mechanisms add up over time and facilitate the joint, collaborative hunting of large mammals, the forming of social hierarchies, the organization into cities and states, the establishment of institutions like schools and companies, etc. Consequently, all of humanity’s accomplishments that we are proud of, and the reason why homo sapiens dominates the planet, are due to emergent behaviors, emerging from highly flexible but still innate interaction patterns. So one can argue that the intelligence that is the basis of many of humanity’s best accomplishments is located at the group and society level, not at the level of individuals. The intelligence in individual brains certainly contributes powerfully and profoundly to the group-level intelligence, but it is only half the story.
One could argue that the mechanisms of interaction are also controlled and operated by the brain, so that is where the intelligence lies after all. While this view is not incorrect, it is also misleading. Inside the brain, there are billions of neurons interacting. Whatever intelligence springs from the brain is due to the neurons firing and interacting. Does that mean that all human intelligence is located in individual neurons? Intelligence is an emergent behavior that can only be understood at several levels. To understand humanity’s feats we need to appreciate all three levels involved: neurons, brains, and groups.
Why is this view relevant in a discussion of AI? Because the intelligence of individual humans is radically different from the intelligence of groups. While the intelligence of individual humans is centralized, self-aware, and equipped with goals, group-level intelligence is distributed, not self-aware, and without primary goals. This difference matters in many ways. For instance, Bostrom discusses the instrumental goals of a hypothetical AI. Instrumental goals are not the end goals of the AI, but they necessarily become intermediate goals whatever the end goals are. Bostrom discusses five instrumental goals, among them resource acquisition and goal-content integrity, the latter being the tendency and desire of the AI to keep its original end goals unaltered. Human societies also tend to acquire all resources that are within reach, and one could argue that resource acquisition is indeed an instrumental goal for distributed, emergent intelligences as well. However, goal-content integrity seems irrelevant for human societies because they have no end goals to begin with.
This is just one example where the assumptions about the nature of the AI are critically important for the arguments and the conclusions. Although never explicitly expressed, it becomes apparent in the course of the book that the AI Bostrom has in mind is a centralized, self-aware AI with clear end goals. Even though he does sometimes talk about “collective superintelligence”, he still seems to have in mind a distributed implementation of a centrally coordinated AI with uniform goals and planning, rather than a set of independent, interacting actors with emergent intelligent behavior. The process by which the AI comes into being is a project in a company, at a university, or run by a government, perhaps not unlike the Manhattan Project that developed the nuclear bomb. This basic assumption defines the scope of the book. Within this scope, the book does an excellent job of discussing scenarios and possible policies. But this basic assumption about the nature of intelligence is also a limitation of the book, because it seems to me that other forms of intelligence are as likely, or even likelier, to emerge than the centralized AI that Bostrom presupposes.
(Aj September 2021)