A co-founder of OpenAI, and its chief scientist, dropped a peculiar phrase during a meeting with new researchers in the summer of 2023, shocking the Silicon Valley establishment and AI enthusiasts alike.
“We’re definitely going to build a bunker before we release AGI,” said Ilya Sutskever, a man once seen as a visionary behind the revolutionary success of ChatGPT, and later, the central figure in one of the most dramatic boardroom battles in the tech industry.
Sutskever spoke openly to the audience about a plan to build a physical bunker for OpenAI insiders: a shelter for the human developers of AGI (artificial general intelligence), a technology that would theoretically surpass human intellect, in case it went rogue. Multiple attendees recollected the exchange in interviews with The Atlantic writer Karen Hao, who is writing a book about the rise of AI and has paid close attention to the threats perceived from inside the industry.
“Once we all get into the bunker — ” Sutskever reportedly began, prompting a baffled researcher to interject, “I’m sorry, the bunker?”
“We’re definitely going to build a bunker before we release AGI,” Sutskever repeated matter-of-factly. “Of course, it’s going to be optional whether you want to get into the bunker.”
To some in the room, according to Hao, it was surreal. To others, it was simply Sutskever being Sutskever — a man known to straddle the line between scientific awe and existential dread. While he had once focused almost entirely on advancing AI capabilities, by mid-2023 he was splitting his time between innovation and AI safety. Colleagues described him as both "boomer and doomer" — exhilarated by AGI’s promise, but terrified of its consequences.
This fixation reached a critical juncture in November 2023, when Sutskever — deeply unsettled by how OpenAI’s leadership, particularly CEO Sam Altman, was steering the company — joined forces with other board members in a surprise attempt to oust Altman.
The coup was swift but short-lived: it lasted mere days before Altman returned, bolstered by overwhelming staff support. Sutskever, whose position had become untenable, departed the company himself months later.
The episode laid bare the internal fractures within OpenAI — between those racing to build AGI and those worried about controlling it. It also raised deeper questions about what it means to wield such transformative technology, and who gets to decide when — or if — it’s released.
It’s unclear whether Sutskever meant the bunker literally or as a metaphor. Either way, the remark exposes a growing concern among AI architects: the future they are building may require shelter from the very tools they are unleashing.