The AIs Are No Longer Science Fiction

A quarterly miscellany of essays by Richard Dooling and news about his upcoming books. Subscribe here.

The Sorcerer's Apprentices
If you are old enough, you may recall the 1940 Disney movie Fantasia, especially the cartoon of Dukas' symphonic poem, "The Sorcerer's Apprentice," starring Mickey Mouse and a host of magic brooms. The grizzled old wizard goes to bed for the night, and Mickey works a spell to animate a broom to haul water for him. When he can't stop it, he chops it to splinters, each of which rises as a new broom, and the multiplying brooms wreak havoc until the sorcerer returns and sets things right.
When it comes to the sorcery of AI, the grizzled old wizard is 78-year-old Geoffrey Hinton, the British-Canadian computer scientist commonly called "the godfather of AI." He pioneered deep learning, worked for Google Brain, and in 2024 won the Nobel Prize in Physics for his work on artificial neural networks. In his Nobel banquet speech, Hinton warned of "the existential threat that will arise when we create digital beings that are more intelligent than ourselves":
We have no idea whether we can stay in control. But we now have evidence that if they are created by companies motivated by short-term profits, our safety will not be the top priority. We urgently need research on how to prevent these new beings from wanting to take control. They are no longer science fiction.
Mark Zuckerberg, Sam Altman, Elon Musk, and the other tech bros are all playing Mickey Mouse, giddy with power and bewitched by their own creations. And the brooms? Those are the millions of AI agents that experts like Hinton and former Google CEO Eric Schmidt warn will soon operate semi-autonomously, working together, possibly developing their own language.
The tech bros are breezy and gee-whiz about it. The pitch is always the same: create an entire app just by describing it, cure cancer, revolutionize the world. Hinton might agree, but he also warns of a 10-20% chance of human extinction in the next thirty years if AI is allowed to proceed apace without guardrails. Nobody solicits his opinions anymore because he doesn't seem to care about stock options or national defense.
Dario Amodei, CEO of Anthropic
The torrent of AI headlines comes thick and fast, because AI implicates so much: the labor market, the stock market bubble, the price of electricity, national defense, to name only a few. If the doomscrolling makes your eyes cross, and you would like to read one thoughtful long-form essay on the promise and peril of AI, I recommend "The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI" by Dario Amodei, the CEO of Anthropic, creator of the popular AI, Claude.
Like Eric Schmidt, Amodei assumes that AI agents will be semi-autonomous and will number in the millions. To focus our attention, he asks:
Suppose a literal "country of geniuses" were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist.... Suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this "country" is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.
I often don't read these articles because they describe Black Mirror episodes I don't want to see happen in real life. No hope for us hominids. No remedy. I credit Amodei, a guy who knows the business and also drops references to Black Mirror and 2001: A Space Odyssey, for describing plausible protections, mainly transparency and mandatory reporting. If an AI tries to blackmail a human by threatening to expose the human's adulterous affair, all the players should be notified of the event and of the programming that led to it.
Now that corporations and governments are waking up to the promise and peril of AI, Anthropic and its CEO appear to be leading the push from within these companies for ethical, transparent use of their products. At the end of January, the Wall Street Journal reported on Anthropic's clash with the Pentagon over whether its technology would be used for domestic surveillance and autonomous lethal operations.
"Humanity is about to be handed almost unimaginable power," says Amodei, "and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."
Competition
Geoffrey Hinton quit working for Google in 2023, not because he thought Google had done anything wrong, but because he wanted to freely sound the alarm about the dangers of AI without worrying that people would think he was criticizing his employer's products.
But these warnings from insiders may amount to almost nothing, because technology has a way of taking on a life of its own. As Hinton put it, "The tech giants are locked in a competition that might be impossible to stop." If country A doesn't pursue AI, country B will. Corporation A announces AI capital expenditures in the hundreds of billions, because if it doesn't, corporation B will.
This is how we ended up with an atom bomb. As J. Robert Oppenheimer, the director of the Manhattan Project, put it: "When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb." Instead of worrying that the Russians will develop an atom bomb before we do, nowadays we worry that the Chinese will develop AI-powered nuclear drone armies before we do.
"This is the trap," says Amodei. "AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all."
Nonusers of AI probably think of it as just a helpful chatbot expressing itself through turbo autocomplete. But, as Amodei put it in his essay:
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.
2001: A Space Odyssey

For decades, any mention of artificial intelligence immediately brought to mind Stanley Kubrick's movie and "Open the pod bay doors, HAL." Even Amodei alludes to it in his essay. But if AI is the Monolith 2.0, then the movie's opening is where it's at. Recall the apes at the water hole in the "Dawn of Man" opening to 2001. An ape uses the long bone of a tapir skeleton as a war club, kills a rival ape with his new tool, and drives the rival tribe away from the water hole. What we don't see in Kubrick's movie is what happens the next day, when both tribes of apes show up at the same water hole, but now they all have bone war clubs.
Today's great apes are more likely to use AI to drive rival tribes away from land containing oil or critical minerals, but the lesson is that we forswear AI at our peril. We could end up being the only ape without a war club, and extinction would soon follow.
Which raises the question: Are we humans chauvinists when it comes to other species? Do we suppose that billions of years of evolution produced humans, and now what? Evolution stops? Do we imagine that the blind watchmaker of natural selection closes up shop and says, "Well, we sure can't do any better than human beings. Just look at their social media sites. These creatures are marvelous"?
I think not. If intelligence is what separates man from beast, then it's fair to wonder how man measures up to AI, and whether humans may soon be regarded as the poultry or livestock of the new ruling species. Are we no longer the smartest monkey?
We may soon find out, because the difference between Disney's cartoon and our situation is the ending. In Fantasia, the sorcerer comes back and fixes everything. But Geoffrey Hinton is the sorcerer, and he's already warned us that he doesn't know how to stop the brooms.
The Acolyte: A Novel
I'm waiting on cover art for my next novel. I'll share it in the next issue of this newsletter.