I first read Nick Bostrom’s Superintelligence a few years ago, before the current AI boom took off. Revisiting it recently, I was struck by how thought-provoking it still is. If you are interested in the philosophical implications of AI, it is essential reading: it explores existential risk, alignment, and the control problem of machine intelligence. Though it was published in 2014, its core arguments hold up surprisingly well.

For me, the book’s greatest value is its function as a calibration tool. In today’s AI bubble, there is a strong tendency to talk as if superintelligence is almost here, simply because current systems can do things that would have seemed astonishing a few years ago. The book reminds you what superintelligence is actually supposed to mean. Bostrom helps re-establish the scale of the idea, and by that standard we are still very far from it.

Rereading it made clear how inflated much of the current discussion around AI has become. Whatever is happening right now, it is still nowhere close to what Bostrom meant by superintelligence.