The threat of AI comes from humans placing too much trust in complex but fallible systems.
Last year, chatbots churning out half-baked poems and a series of apocalyptic declarations about AI thrust us into a civilisational moment. From Rishi Sunak to Elon Musk, two main concerns emerged from the mix of panic and hype. First, will my job still be needed? And second, will this technology end up destroying us all?
Probably not, says Neil D. Lawrence, the inaugural DeepMind Professor of Machine Learning at the University of Cambridge. The Atomic Human is his grand attempt not only to explain what AI is, but to use it as a means of better understanding human intelligence. His mission is semi-religious. As machines cut away some of our capabilities, he writes, we will be left with a “core of humanity”. It is this atomic human that reveals the truth about the human psyche.
But what follows may not cause much excitement. Human intelligence is defined by its “embodiedness” and is severely limited in what it can communicate. We are “butterflies in diving suits”, possessed of a “clumsy but beautiful” intelligence. As a result, we construct an “information topography”: essentially a shared culture, built on our vulnerability, that allows us to overcome problems and collaborate with one another.
Machines are good at reading and imitating parts of this complex framework, but not all of it. A machine’s intelligence, removed from a specific context and the tasks assigned to it, bears little resemblance to the human mind. The underlying idea is that “intelligence” cannot be isolated and reconstructed as an entity abstracted from the human body and its experiences.
So should we even call it “artificial intelligence”? Lawrence himself seems to ask this question. Until 2013 the technology was known as “machine intelligence”. Then it was discovered that computers could process information from images, Mark Zuckerberg and Google piled in, and a rebrand followed. “Overnight, I became an AI expert,” Lawrence notes wryly.
He wonders aloud whether the subject would attract so much attention if an AI conference were renamed the World Forum on Computers and Statistics for Humanity. This delightful cynicism flares up throughout. Of Zuckerberg’s purchase of a machine intelligence institute, he says: “I think he really believed that his investment was eventually going to get us smarter people.”
All of which raises another question: has the introduction of a new technology ever been accompanied by so poor an understanding of what it actually is? The book is at its best when providing an accessible and engaging introduction to computer science. In doing so, it offers something rare: history.
The theoretical foundations of AI can be traced back, roughly, to three Enlightenment figures: Laplace, Leibniz and Newton. Combined with the vast quantities of data made possible by modern computing, their conceptual models for understanding the universe have given us algorithms that mimic human intelligence ever more convincingly. Or, to put it more simply: “The artificial intelligence we are being sold is just a combination of very large datasets and computers.”
Nick Bostrom, an AI giant
Lawrence is at his best when these explanatory reductions are turned on some of the popular characters and stories that have come to define AI. Twenty-odd pages in, two fairly large shots are fired at Nick Bostrom and Ray Kurzweil. These titans have done much to define the narrative of AI’s trajectory towards a “superintelligence” to which we hand over the keys of civilisation. This is all “terrible”, Lawrence writes. Their problem is “confusing the concept of decision-making intelligence with the concept of an intelligent being”.
If anything unifies the book’s subsequent expansive narrative, it is the attempt to convey the presence of intelligence in a variety of relatable scenarios. Discussions range from the experience of locked-in patients writing books to Second World War military operations (you don’t want a computer deciding when the D-Day landings start, Lawrence says).
But too often the text is meandering and chaotic. From William Blake to George Orwell to Jeff Bezos, each chapter is filled with analogies, personal stories, and historical curiosities. The book is Lawrence’s attempt to download his brain onto the page, but the kaleidoscope of personal anecdotes and historical references too often serves to distract rather than illuminate.
The blame for this lies not only with the author but with the publisher. The current fashion is for non-fiction books to take readers explicitly on a journey, on the assumption that unless they are given a nice bedtime story, they will lose interest in the argument.
But being trapped within such a narrative forces the book into detours of rather silly and unnecessary observation. Lawrence writes of resolving “the confusion of boyhood”, coming to understand the conflicted relationship between his father and older brother through a book by Douglas Adams. Really?
This is a shame, because a very important warning lies hidden across these 380 pages. Andrew Orlowski has defined AI as a religious rather than a technological moment. Lawrence seems to think along similar lines: “When humans feel incapable of making decisions, we are tempted to defer them to what we believe to be omniscient beings.” Coming from a quiet dissenter within the field, this could have been the spine of a more focused work; one can’t help feeling that a fascinating, even controversial book has been lost.
One of the book’s central provocations is that we are heading towards a new Horizon scandal. The real threat from AI comes not from Terminator-style robots trying to wipe out humanity, but from humans placing too much trust in systems that are inherently complex but fallible. Many of the systems we are building, Lawrence argues, are not even understood by their creators (though here he leans heavily on the claim of Russian interference in the 2016 election through Facebook, for which the evidence is scant).
The discussion is both philosophical and political. There is a brief exhortation from Popper about “open societies” and the usual warnings against the arrogance of the “tech fraternity”. Lawrence claims that Sam Altman and OpenAI are replacing “great historical figures” with “great computers”. But behind this debate lies an unanswered tension: which is more dangerous, the technology itself or our potentially unwavering faith in it?
Read selectively, this is a thoughtful and serious book. Individual paragraphs and sentences tell you the truths you need in a world drowning in AI predictions and noise. I have a feeling we will be hearing from the author again: if not with a shorter, more concise explanation of this technology and its limitations, then perhaps as a witness in the government inquiry into the next AI-inspired Post Office scandal, which seems all but inevitable.