Books - Read and Enjoyed

Our Final Invention

Artificial Intelligence and the End of the Human Era

Thomas Dunne Books, 2013

James Barrat

Superintelligence

Paths, Dangers, Strategies

Oxford University Press, 2014

Nick Bostrom

The Coevolution

The Entwined Futures of Humans and Machines

MIT Press, 2020

Edward Ashford Lee

Having read these three books in short succession, I thought it would be useful to consider their scope and conclusions together.

Toby Ord ranks the risk to humanity from what he calls unaligned artificial intelligence as the highest among the existential risks he surveys in his 2020 book The Precipice. In his words:

In my view, the greatest risk to humanity’s potential in the next hundred years comes from unaligned artificial intelligence, which I put at one in ten.

(The Precipice, page 203)

He estimates the risk that AI will cause our extinction, or at least destroy our long-term potential as a civilization, within the next hundred years to be 10%. The next biggest risks he sees are an engineered pandemic (3%) and other, unforeseen anthropogenic risks (3%). The risk that climate change jeopardizes our survival is around 0.1% in Ord’s assessment. He is not alone in worrying about AI. A number of scientists and public figures, from Stephen Hawking to Elon Musk, have raised concerns, and several books and many articles have been written on the subject.

The three books reviewed here take different approaches and draw partially overlapping, partially contradicting conclusions. James Barrat relays the views of scientists who have studied and researched the topic for years, many of whom he interviewed. Nick Bostrom’s book is a systematic, comprehensive study that carefully discusses and assesses all relevant aspects of AI, even using differential equations to model the process of intelligence explosion. Lee doubts that machines can ever be truly like humans, because humans are analog while machines are digital, and he posits that this difference may matter. He is much more optimistic, seeing humans continuously coevolving with machines toward a common future.
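To give a flavor of Bostrom’s quantitative treatment, here is a minimal sketch of the growth model he discusses in his chapter on the kinetics of an intelligence explosion (the notation is simplified from his, and the proportionality assumptions are his illustrative case, not a prediction):

\[
  \frac{dI}{dt} = \frac{D(I)}{R(I)}
\]

where I is the system’s intelligence, D(I) the optimization power applied to improving it, and R(I) the recalcitrance, i.e. how strongly the problem resists further improvement. Once the system starts improving itself, its optimization power grows with its own intelligence; if D(I) = cI while recalcitrance stays roughly constant, R(I) = k, then

\[
  \frac{dI}{dt} = \frac{c}{k}\,I
  \qquad\Longrightarrow\qquad
  I(t) = I(0)\,e^{(c/k)\,t},
\]

i.e. exponential growth: the intelligence explosion.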

The main concern is the emergence of Artificial Superintelligence (ASI), not so much artificial human-level intelligence or narrow AI, which excels only in a narrow field of expertise such as playing chess or Jeopardy, or driving a car. An ASI would be able to reason about all aspects of the world at large with an intelligence that exceeds human-level intelligence by orders of magnitude.

While the three books approach the topic in different ways and reach partly contradicting conclusions, it is curious that none of them puts much effort into making clear what they mean by terms that are absolutely central to the questions raised: What exactly is intelligence? Who is the agent that is intelligent? And what are the human values that should be preserved?


What is intelligence?

The closest James Barrat comes to defining intelligence is on page 25:

For the time being it’s enough to say that by general intelligence we mean the ability to solve problems, learn, and take effective, human-like action, in a variety of environments.

If we take away the word human from this definition, it applies to basically any animal, from single-celled amoebae to insects, fish and mammals. On closer inspection it even applies to plants, since grass, algae and trees also solve problems of nutrient supply and procreation; they learn and adapt to their environment, and they take action in response to changes of light, weather and seasons and to invasions of hostile insects and microbes. So every living creature fits the bill. Adding the word human severely restricts the class, but it does not help, because human-like is never defined or explained, and it seems to be anybody’s guess what human-like intelligence might be. This is particularly unhelpful because an AI will almost certainly not be human-like.

Bostrom offers no different or more illuminating definition of the term, as discussed in the review of his book.

Edward Lee does not define intelligence, or human intelligence, either, but he discusses in detail several notions that he seems to consider core to human intelligence: free will, consciousness, an analog body. Lee is not very explicit about whether these are essential elements of intelligence or what else would be required, but he expresses doubts that digital machines can acquire or have these capabilities, precisely because of their digital nature. Given this vagueness, and the fact that consciousness and free will are neither defined nor pinned down in explicit terms, Lee too leaves the question of what intelligence is essentially open.


Who is intelligent?

The question of who is the bearer of intelligence is discussed even less explicitly. All three books mention in passing that an ASI could be distributed or a “cloud service”, whatever that is, and Bostrom at least discusses at length the copying, duplication, uploading and downloading of an AI, but none of the books tries to localize or delineate the bearer and agent of intelligence. Why is this relevant? Because the answer is not at all obvious, and it is important to know whom we are dealing with.

Part of the AI may be confined to hardware devices or robotic bodies, but most expressions of intelligence will be distributed over many hardware devices on server farms in the vastness of the internet. Does an AI have to be embodied and able to interact with the physical environment, as some argue, or can it be pure software living in cyberspace? If AI is vested in software, what properties distinguish AI software from ordinary software? Software is not easily delimited, since it can dynamically recruit other software to do a task or to obtain some information; it can grow, shrink and transform itself. How can we assign an identity to software? This matters because only software with a specific identity and immutable goals qualifies as an ASI, as Bostrom, for example, elaborates at length. But neither he nor the other authors discuss where the ASI may be located and what kind of identity it may assume.


What are human values?

The third curious omission in all three books is a definition and discussion of human values. Barrat and Bostrom are explicit that an ASI may not be interested in or preserve human values, but they do not spell out what exactly those values are or why they should be preserved. Bostrom at one point mentions that

[h]umans value music, humor, romance, art, play, dance, conversation, philosophy, literature, adventure, discovery, food and drink, friendship, parenting, sport, nature, tradition, and spirituality, among many other things

suggesting that these are among the human values that are endangered when an ASI rules the world. But these are more like cultural habits that many humans enjoy than values independent of humans. If humans go extinct, there seems to be no particular benefit in some ASI still entertaining these habits. It seems to me that the main, or only, value that both Barrat and Bostrom have in mind is the survival of humans in a form in which they can still have a good life, i.e. they are not slaves and enjoy some leisure time in which they can freely interact with other humans. It is understandable that humans want humanity to survive; that desire is built into our genes. But beyond survival there is really no value worth preserving after humanity’s extinction. I think it would have helped and directed the discussion in Barrat’s and Bostrom’s books if this had been spelled out explicitly, because the discussion could then have focused more productively on human survival.

Lee does not discuss human values worth preserving, because he assumes that human culture will evolve, essentially absorbing AI to form a combined culture that includes both humans and AI. Presumably all of today’s cultural expressions will change, and they will change with the consent of humans.


Scope of the books

The lack of clarity about what intelligence might be and who its bearer is limits all three books to scenarios in which AI is like humans, only much faster and smarter. If the superintelligence of the future is very different, these books will not be much of a guide. This is particularly relevant to Barrat’s and Bostrom’s books, since they discuss at length how to prevent an ASI from becoming hostile. If we have the wrong understanding of what an ASI is, we will have little chance to influence its development and properties.


Conclusions

Barrat and Bostrom are very concerned about a future ASI, and their concern does not become invalid just because they have not considered all possibilities. Quite the contrary: we should be even more concerned, because the uncertainty is so much bigger than Barrat and Bostrom assume. We should redouble our efforts to study all those scenarios that we have not yet thought about.

Lee’s conclusion is much more relaxed, as he assumes that our culture will simply continue to evolve, in the future substantially enhanced by AI. However, it has not become clear to me what his optimism is based on. To start with, coevolution can only work if both partners evolve at similar speeds. If one partner, the AI, evolves a thousand times faster, coevolution quickly breaks down and the AI becomes the dominant partner. This is exactly the core of I. J. Good’s argument for the intelligence explosion, which is the basic assumption of Barrat and Bostrom. Lee does not discuss the speed of coevolution and offers no argument why the evolution of human culture and of AI should proceed in lock-step. So although coevolution has certainly shaped the last century and is relevant to the discussion of a future AI, it seems unlikely that harmonious coevolution will continue undisrupted.
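To make the speed argument concrete, here is a toy calculation (my illustration, not Lee’s or Good’s, under the deliberately crude assumption that both partners improve exponentially at fixed rates):

\[
  H(t) = H_0\,e^{h t}, \qquad A(t) = A_0\,e^{a t}, \qquad a = 1000\,h,
\]

where H measures the adaptive capability of human culture and A that of the AI. The ratio between the two then grows as

\[
  \frac{A(t)}{H(t)} = \frac{A_0}{H_0}\,e^{(a-h)t} = \frac{A_0}{H_0}\,e^{999\,h t},
\]

so over a single human adaptation time constant, t = 1/h, the AI pulls ahead by a factor of e^999. Whatever parity existed at the start is gone almost immediately, and with it any meaningful sense of coevolution.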

While the future remains uncertain and even the big trends are hard to predict, we have plenty of reason to study possible trajectories of AI development.

(AJ October 2021)