
Bryan Cantrill on AI Doomerism: Intelligence Is Not Enough

AI doomers “have long since disappeared into a religion that masquerades as Bayesian analysis,” the Oxide CTO tells TNS.
Dec 10th, 2023 6:00am
Oxide’s Bryan Cantrill speaking at Monktoberfest 2023.

When historians look back on our moment in time, they’ll see a species facing a new technology. They’ll see new concerns, with some people even voicing an existential fear — that AI could extinguish humanity.

But maybe they’ll also remember something else. That one man stood on a stage in Portland, Maine, and defended humanity’s honor, arguing that we possess unique attributes that AI will never replicate.

That humanity, as William Faulkner once said, will not just endure, but prevail.

That defense came from Bryan Cantrill, co-founder and Chief Technology Officer of Oxide Computer Company. At the 11th annual “Monktoberfest,” a developer conference held “to examine the intersection of social trends and technology,” Cantrill aired his own strong feelings against highly hypothetical “existential threat” scenarios. “My talk is not for the AI doomerists,” Cantrill told us in an email interview.

“They have long since disappeared into a religion that masquerades as Bayesian analysis.”

And in Portland, Maine, he told his audience that the talk was provoked by “a whole flotilla of internet garbage that I have been unspeakably trolled by…”

How AI Will Wipe Out Humankind

How exactly would AI wipe out humankind? Cantrill dismisses some of the commonly proposed scenarios as wildly unsubstantiated and even laughable. (“You’re not allowed to say that a computer program is just going to control nuclear weapons.”)

Okay, but what if an AI somehow developed a novel bioweapon? “I think that reflects some kind of misunderstanding of how complicated a bioweapon is.” How about if a super-intelligent AI developed novel molecular nanotechnology?

“I am embarrassed to say that I read an entire book on nanotechnology before I realized that none of it had been reduced to practice… This was all effectively hypothetical.”

Cantrill then offered a shorter reaction to what he sees as a far-fetched hypothetical: “Jesus Christ. Nanotechnology is back again.”

Cantrill enjoyed showing his skepticism about a humanity-extinguishing AI, noting that even as a hypothesis it raises “gazillions of questions.” For example, why would that be the AI’s motivation? And where does it get its means of production? “As my daughter is fond of saying, whenever this comes up about AI taking over the world: ‘It has no arms or legs.'”

Drawing a laugh from the audience, Cantrill elaborates. “The lack of arms and legs becomes really load-bearing when you want to kill all humans.”

And how exactly would AI confront the counter-threat of human resistance? “Honestly, it’s kind of fun to fantasize about…” Cantrill says. “Can you imagine if we were all united by the cause of fighting the computer program?” Consider for a moment the prospect of all humankind, focusing its sundry powers on thwarting a single malfunctioning piece of software.

“It would be awesome!”

Is Software a Nuclear Weapon?

One example of AI doomerism came from a well-meaning person who’d reluctantly supported, as Cantrill described it, “pausing all AI — that AI is scary, and we must pause all AI research.”

In a September tweet, Flo Crivello, the founder of the AI-assistant company Lindy, argued that “intelligence is the most powerful force in the world… and we are about to give a nuclear weapon to everyone on Earth without giving it much thought…”

Crivello also argued that “No substantial counter-argument has been offered to the existential risk concerns,” deriding AI supporters as “not serious people.”

First, Cantrill took offense that the serious people in this scenario are the ones going on Twitter to post tweets “equating a computer program with nuclear weapons.” And that this supposedly serious contingent has gone so far as to toss out their own assessments of our “probability of doom” — that is, the complete annihilation of all humankind.

“Can we have a little more reverence for our shared ancestry?”

But Cantrill believes this “outlandish” and unsupported hypothesis could itself lead to scary scenarios. A pause in AI development, for example, would be “brazenly authoritarian. It has to be.” Cantrill points out that even “restricting what a computer program can do is pretty scary, violating what many people view as natural rights.”

And further down that slippery slope, as one slide points out, “The accompanying rhetoric is often disturbingly violent.” Some who see an existential threat to humanity from AI can then justify actual acts of humanity-protecting violence.

As Cantrill sees it, arguing that there’s an existential threat to humanity leads to people saying “We should control GPUs. And those who would violate the international embargo on GPUs? Yes, we should bomb their data centers. In fact, we should preemptively strike their data centers.”

Cantrill mocks this as an overreaction that’s all “because of a computer program.” And if the world is in need of a “serious” counterargument to this side of the debate, Cantrill offers one himself.

“Please don’t bomb the data centers.”

What an AI Can’t Do

Cantrill had titled his talk “Intelligence is not enough: the humanity of engineering.”

Here the audience realizes they’re listening to the proud CTO of a company that just shipped its own dramatically redesigned server racks. “I want to focus on what it takes to actually do engineering… I actually do have a bunch of recent experience building something really big and really hard as an act of collective engineering…”

Sharing a tale from the real world, Cantrill put up a picture of their finished server, then told horror stories from the scariest dystopia of all: Production.

  • They’d spent weeks debugging a CPU that refused to come out of reset — only to discover that the problem was a bug in their supplier’s firmware.
  • Another week was spent on a network interface controller that also wouldn’t come out of reset. Again, their vendor had made a mistake — it involved the specifications for one of its crucial resistors.
  • There was even a time period they later called “data corruption week” — when corruption started sporadically appearing in their OS boot images. (A slide explains the mind-bogglingly obscure cause: their microprocessor “was speculatively loading through a stowaway mapping from an earlier boot.”) Cantrill says it was only a lone human who’d intuited where to look. “And it was their curiosity that led them to this burning coal fire underneath the surface.”

Importantly, the common thread for these bugs was “emergent” properties — things not actually designed into the parts, but emerging when they’re all combined together. “For every single one of those, there is no piece of documentation. In fact, for several of those, the documentation was actively incorrect. The documentation would mislead you… And the breakthrough was often something that shouldn’t work.

“Something that a hyper-intelligent superbeing would not suggest.”

Cantrill put up a slide saying “Intelligence alone does not solve problems like this,” presenting his team at Oxide as possessed of something uniquely human. “Our ability to solve these problems had nothing to do with our collective intelligence as a team…” he tells his audience. “We had to summon the elements of our character. Not our intelligence — our resilience.”

“Our teamwork. Our rigor. Our optimism.”

And Cantrill says he’s sure it’s the same for your (human) engineers…

He drives home the essential point of his talk. “These are human attributes.” When hiring, he says, Oxide considers more than just intelligence — seeking collaboration and teamwork and, most of all, shared values. “This kind of infatuation with intelligence comes from people that honestly just don’t get outside enough.

“They need to do more things with their hands, like look after kids, go hiking… ”

A Profound Truth

Cantrill arrived at what he called a profound truth: “Intelligence is great; it’s not the whole thing. There is a humanity.”

One slide clarifies that AI can still be useful to engineers, but lacks three crucial attributes: willpower, desire, and drive. “And we do a disservice to our own humanity when we pretend that they can engineer autonomously,” Cantrill says.

“They can’t. We humans can…”

While Cantrill sees the risk of human extinction as too small to worry about, he acknowledges there are real risks. But “Bad news,” Cantrill says. “It’s the risks you already know about… It’s the racism. It’s the economic dislocation. It’s the classes — it’s all the problems that we have been grappling with as humans, for our eternity.

“AI acts as a force multiplier on those problems, and we need to treat that really, really seriously. Because AI will be abused — already is being abused. And AI ethics is exceedingly important.”

There’s one silver lining, Cantrill notes. There are also laws, regulations, and entire regulatory regimes already in place around things like nuclear weapons, bioweapons research, and even self-driving cars. “Let us enforce them…” Cantrill says. “Take your fear and steer it into enforcing regulations.”

But this makes it even more important to push back on what he sees as overblown “AI doomerism.” As Cantrill put it in a recent blog post, “the fears of AI autonomously destroying humanity are worse than nonsense, because they distract us from the very real possibilities of how AI may be abused.”

In his talk, Cantrill even suggested people are secretly more comfortable with instead pondering an oversized dystopia where “we’re all going to be extinct anyway… So like, ‘We’re all going to be in the post-Singularity afterlife… We actually don’t care about this world.'”

“Some of us actually care about this planet and this life and this world. This is the world that we live in.

“And we should not let fear — unspecified, non-specific fear — prevent us from making this world better.”
