Between the two, Hoffman worries more about the problemists. American farmers who’ve been through the GMO wars will appreciate his critique of the “precautionary principle.” In his discussion of attitudes toward technology generally, he cites examples from a variety of fields, agriculture included.
Hoffman doesn’t say “no regulation, ever.” He asks, though, that we appreciate that innovation is itself a form of regulation, whereas strict adherence to the precautionary principle can stifle the very innovations that would make a technology safer.
To illustrate the innovation-as-regulation notion, he cites the early, unregulated days of the automobile, when auto makers introduced — for competitive reasons — many safety features we take for granted. For example, countless wrists, arms and jaws were broken by people trying to crank-start cars until 1911, when Charles Kettering invented the electric starter.
The next year it was available on Cadillacs, helping establish that brand’s reputation for luxury. It eventually became standard equipment.
Even as the authors respond to AGI’s critics, they keep returning to all the wonderful things the technology will make possible. They see improvements in people’s lives in fields ranging from manufacturing to agriculture, from health care to education.
“What if every child on the planet suddenly has access to an AI tutor that is as smart as Leonardo da Vinci and as empathetic as Big Bird?”
Superagency is a well-informed, thought-provoking book. I’m especially intrigued by the authors’ theory that the key to acceptance is getting the technology into the hands of a large number and wide variety of people.
Using AI, something I’ve started to do fairly recently, has certainly changed my attitude. AI tools like Gemini and Perplexity are helping me greatly in my study of the Italian language. My opinion of AI has gone from neutral to somewhat positive.
The reason I’m not even more positive lies in the question Hoffman and Beato fail to answer: Just how serious is the risk of a Terminator scenario? And if the risk isn’t negligible, what’s the best way to meet it? Even accepting the innovation-is-regulation premise, you have to wonder if innovation alone could keep this risk at bay.
I suspect Hoffman might have a convincing answer. I wish he’d shared it with us.
Yuval Noah Harari, author of “Sapiens,” spoke for many in one of Superagency’s marketing blurbs:
“Superagency is a fascinating and insightful book, providing humanity with a bright vision for the age of AI. I disagree with some of its main arguments, but I nevertheless hope they are right. Read it and judge for yourself.”
Urban Lehner can be reached at urbanize@gmail.com
(c) Copyright 2025 DTN, LLC. All rights reserved.