He’s King of the AI Boom. Why Do Former Colleagues Say He Can’t Be Trusted?
The question hung in the Oakland federal courtroom, sharp and simple. "Do you always tell the truth?" an attorney asked Sam Altman, the CEO of OpenAI and the undisputed king of the AI boom. "I believe I am a truthful person," Altman replied.
It was a carefully parsed answer, the kind lawyers coach you to give. But the people who know him best have already had their say. And what they've said, under oath and in secret memos, paints a very different picture. A picture of a man who, as one former colleague put it, "says one thing to one person and completely the opposite to another." A man whose own chief scientist compiled a 52-page dossier with a simple, devastating headline: "Sam exhibits a consistent pattern of lying."
This isn't just another tech-world squabble. When the person steering the most powerful AI company on the planet is accused by his closest collaborators of being a "sociopath" who "cannot be trusted," it stops being gossip and starts being a matter of public consequence. The OpenAI trial, sparked by Elon Musk's lawsuit, has thrown open the doors to a decade of secrets. And what's inside should make all of us pause.
The Courtroom Drama: A Tale of "Chaos" and "Deception"
The most damning testimony came from Mira Murati, OpenAI's former chief technology officer. She wasn't a disgruntled outsider. She was so deeply woven into the company's fabric that she briefly served as its CEO after the board fired Altman in 2023. Yet her words in court were searing.
"My concern was about Sam saying one thing to one person and completely the opposite to another person," she testified. She went further, stating plainly that Altman was "creating chaos" and, at times, was deceptive with her and other executives. He was, in her telling, a leader who would pit executives against one another, undermining her own role as technology chief.
Here's the paradox that makes your head spin: even after describing this toxic environment, Murati said she still wanted him to stay as CEO. She feared the company would "completely blow up" without him. It’s a hostage-like dynamic where the company's fate seems terrifyingly intertwined with the very person destabilizing it.
The board members who ultimately tried to fire him shared this deep unease. Tasha McCauley, a former board member, testified that her "smaller interactions" with Altman gave her "real doubt" about his trustworthiness. She described a "culture of lying, a culture of deceit" that emanated from the CEO's office, making it impossible for the board to make informed decisions.
One concrete example? Altman told the board that three safety reviews of a ChatGPT variant had been completed. In reality, only one had been done. It's a small detail, but in the high-stakes world of AI safety, where the difference between "tested" and "untested" could one day be measured in real-world harm, it's a detail that matters immensely.
The Pre-History of Distrust: It Didn't Start at OpenAI
Perhaps the most chilling warning came from beyond the grave. Aaron Swartz, the brilliant programmer and internet activist who took his own life in 2013 while facing federal charges, was an early voice of alarm. Swartz, who had been in the same Y Combinator cohort as Altman, reportedly told friends before his death: "You need to understand, Sam can never be trusted. He's a sociopath. He is capable of anything."
Swartz's words now feel prophetic.
Years before OpenAI became a household name, Altman ran a location-sharing startup called Loopt. Senior employees there allegedly grew so concerned about his leadership that they urged the board to fire him, citing a lack of transparency. Later, at Y Combinator, where Altman became president, tensions simmered again. Paul Graham, the founder of the prestigious startup accelerator, reportedly confided to colleagues that "Sam had been lying to us all the time."
This isn't a single bad week or a misunderstanding. It's a trail. A pattern that stretches back over a decade, through multiple companies, echoing the same complaint: a leader who tells you exactly what you want to hear in the moment, and something entirely different to the person in the next room.
The Scientist's Secret Memo: What Ilya Sutskever Saw
Ilya Sutskever, OpenAI's co-founder and former chief scientist, is widely regarded as one of the most brilliant minds in artificial intelligence. He's not a bomb-thrower. So when he spent months compiling a 52-page secret memo about his CEO's behavior and sent it to board members as "disappearing messages" to avoid detection, the tech world paid attention.
The memo's headline item was blunt: "Sam exhibits a consistent pattern of lying, undermining his executives, and pitting his executives against one another." Sutskever also gathered approximately 70 pages of Slack messages, HR documents, and screenshots, many supplied by Murati herself, to build his case.
During his deposition, Sutskever was asked what action he believed was appropriate. His one-word answer: "Termination".
He also revealed that he had been quietly considering Altman's removal for "at least a year" before the board finally acted in November 2023. Why the secrecy? Because he was genuinely afraid. "If Altman had become aware of these discussions," Sutskever testified, "he would just find a way to make them disappear."
Think about that for a moment. The chief scientist of the world's most important AI company was so fearful of his own CEO that he resorted to spy-like tactics just to have an honest conversation with his board.
A Master Manipulator or a Pragmatic Leader?
Let's be fair for a moment. Altman's supporters, and there are many, argue that these accusations are a rehash of old events, driven by people with agendas. They point to his undeniable success: raising hundreds of billions of dollars, brokering massive infrastructure deals, and steering OpenAI to the center of the global economy.
But even some of his defenders acknowledge an uncomfortable truth about his skill set. According to numerous engineers interviewed in a New Yorker investigation, Altman lacks deep technical expertise. He has been known to mix up basic machine learning terms, and his real gift isn't coding; it's persuasion.
One former OpenAI researcher described how Altman operates: "He sets up structures that, on paper, constrain him in the future. But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was." Another insider called it "Jedi mind tricks."
And then there's the most explosive label of all: sociopath. A board member directly told investigators that Altman is "unconstrained by truth," possessing two rare traits: "a strong desire to please people, to be liked in any given interaction," and "almost a sociopathic lack of concern for the consequences that may come from deceiving someone."
A senior Microsoft executive, a partner, not a rival, put it even more starkly: "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer".
What This All Means for the Future of AI
The core of OpenAI's founding mission was radical: to ensure that artificial general intelligence benefits "all of humanity." The company was deliberately structured as a nonprofit with a board empowered to prioritize safety over profit, even over the company's survival.
But the trial has laid bare how completely that structure has been hollowed out. When the board exercised its authority and fired Altman, he was reinstated within five days, after he threatened to "hollow out" the company and hundreds of employees threatened to resign. The message was clear: the CEO, not the board, holds the real power.
As former board member Helen Toner testified, the original mission was "deeply related" to safety. But as the company raced toward commercialization, "safety was not as prioritized". Rosie Campbell, a former safety researcher, testified that two safety teams were disbanded and that product launches sometimes bypassed proper review.
The question that now hangs over all of us is simple and deeply unsettling: if the people who know Sam Altman best say he cannot be trusted with the most powerful technology in human history, why is he still the one holding the keys?
The allegations against Sam Altman are not new. But the courtroom testimony from Mira Murati, Ilya Sutskever, and OpenAI's own board members has transformed whispered concerns into a matter of public record. For those of us watching from the outside, the trial offers a rare, unfiltered glimpse into the human dynamics behind the AI curtain. And what it reveals is that the greatest risk in artificial intelligence may not be the code; it may be the character of the person in charge.