“A Morale Boost in Some Ways”: Sam Altman Reveals What Really Happened When Elon Musk Left OpenAI
There’s a moment in every founder’s journey when they realize: the person they once admired most might actually be holding everyone back.
For Sam Altman, that moment crystallized in an Oakland federal courtroom this week, under oath, with billions of dollars and the future of artificial intelligence hanging in the balance. When asked how Elon Musk’s departure from OpenAI’s board in 2018 affected the company, Altman didn’t hesitate. Musk’s exit, he said, was “a morale boost in some ways.”
It’s the kind of soundbite that’s almost too good: a polite, devastating jab delivered in Altman’s characteristic measured tone. But buried beneath the headline is something far more interesting than Silicon Valley gossip: a case study in how leadership style can make or break an organization’s culture, and a window into one of the most consequential feuds in tech history.
Let’s unpack what Altman actually said, why it matters, and what it reveals about the two men fighting over the soul of artificial intelligence.
What Sam Altman Actually Said in Court
The Chainsaw Management Style Musk Brought to OpenAI
Altman’s testimony on Tuesday painted a vivid picture of Musk’s approach to running a research organization, and it wasn’t subtle. According to Altman, Musk demanded that OpenAI president Greg Brockman and former chief scientist Ilya Sutskever rank researchers by their accomplishments and then, as Altman put it, “take a chainsaw through a bunch.”
Think about that for a second. In a research lab, a place where breakthroughs can take years of quiet, iterative work, Musk wanted performance rankings. Constant accountability. Short-term results. It’s the management equivalent of digging up a seedling every week to check if the roots are growing.
Altman acknowledged that this was the management philosophy Musk is famous for, the same “hardcore” approach he brought to Tesla, SpaceX, and later Twitter (now X). But here’s the key insight from Altman’s testimony: what works on a factory floor or a social media platform does not translate to a basic research environment.
“I Don’t Think Mr. Musk Understood How to Run a Good Research Lab”
This was the line that landed hardest in the courtroom. When his lawyer, William Savitt, asked directly about Musk’s impact on morale, Altman responded:
“I don’t think Mr. Musk understood how to run a good research lab. For a research lab where people need, sort of, psychological safety and long periods of time to pursue an idea, this idea that you constantly have to show your results, and if they’re not good enough on a short period, you’re going to get fired. That really didn’t work for the kind of research we went on to successfully do.”
This is more than courtroom rhetoric. Altman is articulating a fundamental tension in knowledge work: you can’t pressure-cook creativity. The very researchers Musk was “demotivating”, as Altman put it, were the ones who would later build the foundations of ChatGPT. Musk’s approach didn’t just upset people; it actively undermined the research process.
It’s worth pausing here to note something: Altman’s not just talking about a management philosophy. He’s describing a feeling. The relief that came when Musk left wasn’t about personal dislike, it was about suddenly having the space to think. “Staff members realized they didn’t have to work this way anymore,” Altman testified. That’s more than a morale boost. That’s the removal of a psychological weight.
The “Hair-Raising” Moment That Revealed Musk’s True Intentions
Altman also recounted a pivotal moment early in OpenAI’s history that he described as “hair-raising.” When several co-founders asked Musk what would happen to his controlling stake in OpenAI if he died, Musk responded: “Control of OpenAI should pass to my children.”
Let that sink in. A company founded to ensure artificial general intelligence benefits all of humanity — and Musk was casually suggesting it should become a family inheritance. Altman testified that this exchange made him “extremely uncomfortable.” It also crystallized the core disagreement that would eventually tear the partnership apart: Musk wanted total control, and the other co-founders believed that was antithetical to the entire mission.
The Backstory: How Musk and Altman Went from Co-Founders to Courtroom Rivals
2015–2018: The Founding Vision, and the First Cracks
OpenAI was founded in 2015 as a nonprofit with a disarmingly idealistic mission: build artificial general intelligence that benefits all of humanity, not just a handful of corporations. Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, and others came together around a shared fear that Google’s DeepMind might achieve AGI first, and that concentrated power over such a technology would be catastrophic.
Musk invested at least $38 million (and possibly up to $50 million, depending on which source you reference) to get OpenAI off the ground. For a while, the partnership worked. But tensions were brewing almost immediately over control, structure, and strategy.
By 2017, Musk was pushing hard for a for-profit structure, with himself holding majority control. The board, including Altman, rejected this. What followed is the stuff of Silicon Valley legend: according to co-founder Greg Brockman, when Musk was told he wouldn’t get majority control, he “stood up and stormed around the table,” dramatically declared “I decline,” seized a painting of a Tesla that had been gifted to him, and stormed out of the room.
February 2018: Musk Walks Away (or Was Pushed?)
The official line in 2018 was that Musk left OpenAI’s board to avoid a conflict of interest with Tesla’s AI work. But the reality now emerging in court is considerably messier. Altman testified that after his bid for control failed, Musk “lost confidence” that OpenAI could succeed and, in a pattern that would repeat itself, decided that if he couldn’t control it, he’d compete with it.
Musk’s resignation speech to employees didn’t help. According to testimony, he told the team he saw no path forward for OpenAI and was going to focus on AGI at Tesla. During a Q&A, he added that he wasn’t going to work on safety, just on catching up with DeepMind. “That generated a strong, negative reaction,” according to reports.
2024–2026: Lawsuits, xAI, and an Escalating War
What began as a quiet separation has become an all-out legal war. Musk launched his own AI company, xAI, in 2023. In 2024, he sued OpenAI and Microsoft, alleging the company had abandoned its nonprofit mission. The suit seeks $180 billion in damages, demands OpenAI revert to nonprofit status, and calls for Altman and Brockman to be removed from their roles.
The trial has featured testimony from some of the biggest names in tech, Microsoft CEO Satya Nadella, former OpenAI CTO Mira Murati, and board members past and present. It’s a courtroom drama that’s pulled back the curtain on the messy, human, ego-driven reality behind the polished promises of AI safety.
What This Reveals About Leadership, Research Culture, and Startup Psychology
Why “Chainsaw Management” Fails in Research Environments
Musk’s approach (rapid results, constant evaluation, aggressive cuts) has been effective in some contexts. Tesla ships cars. SpaceX lands rockets. But research labs operate on fundamentally different timelines. As Altman put it, researchers need “psychological safety and long periods of time to pursue an idea.”
This aligns with what we know from organizational psychology. Google’s famous Project Aristotle study found that psychological safety, the belief that you won’t be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes, was the number one predictor of high-performing teams. Musk’s “chainsaw” method is essentially the opposite: an environment of fear where every contribution is constantly being judged against short-term metrics.
Psychological Safety: the Invisible Engine of Innovation
Altman’s testimony surfaces something that’s often missing from Silicon Valley’s “move fast and break things” mythology: creativity requires room to fail. When Musk ranked researchers by their accomplishments, he wasn’t just creating paperwork; he was signaling that the process of exploration wasn’t valued. Only outputs mattered. And in AI research, where the path to a breakthrough is often winding and unpredictable, that’s a recipe for killing exactly the kind of work that produces breakthroughs in the first place.
The Paradox of the “Toxic but Successful” Founder
Some recent research from Macquarie University has explored the “toxic but successful” founder phenomenon, and Musk is arguably the archetype. The study suggests that high-pressure start-up culture can’t make up for bad leadership, and that the “dark triad” traits often found in founders (narcissism, Machiavellianism, psychopathy) can create short-term results at the cost of long-term organizational health.
Altman’s testimony is essentially a real-world illustration of this dynamic. Musk’s intensity probably did push the company forward in some ways. But it also demotivated key researchers and created an atmosphere where people were relieved when he left. The moral of the story isn’t that Musk is a villain; it’s that leadership style needs to match the work being done.
The Bigger Picture: What the Musk-Altman Trial Means for the AI Industry
Non-Profit vs. For-Profit: the Philosophical Battle
At its heart, this trial is about a question that will shape the next decade of AI: should the companies building the most powerful technology in human history be driven by profit, or by a mission to benefit humanity?
Musk says OpenAI betrayed its founding promise by becoming a for-profit entity worth potentially trillions. Altman says the shift was necessary to raise the billions needed to compete. Both sides have legitimate arguments, and the judge’s ruling could fundamentally reshape how AI companies are structured and governed.
What’s Actually at Stake ($180 Billion and the Future of AGI)
The numbers are staggering. Musk is seeking $180 billion in damages. OpenAI is reportedly planning an IPO that could value the company at over a trillion dollars. The trial’s outcome could decide who sits on OpenAI’s board, whether the company can go public, and, most importantly, who gets to control the development of artificial general intelligence.
But beyond the dollars and board seats, there’s a deeper question. Altman’s testimony reveals a fundamental disagreement about power: Musk believed that concentrating control in a single, trusted leader was the safest path for AGI development. Altman and the other co-founders believed that concentrating that much power in one person, any person, was the most dangerous possible outcome.
That’s not a legal question. It’s a philosophical one. And the tension between those two visions is exactly what’s playing out in that Oakland courtroom.
Key Takeaways from Altman’s Testimony
- Musk’s departure genuinely improved morale. Altman was unambiguous: researchers felt relief when Musk left. The constant pressure, the ranking, the threat of being “chainsawed”: it all disappeared.
- Control was the central conflict. Altman testified that Musk wanted “total control” of any for-profit OpenAI entity and only trusted himself to make “non-obvious decisions.”
- The Tesla merger threat was real. Musk wanted to fold OpenAI into Tesla as its AI division. Altman believes that would have “destroyed” the company’s ability to follow its independent mission.
- Musk’s response about his children inheriting control was a turning point. The suggestion that OpenAI could become a Musk family asset crystallized the trust gap between the co-founders.
- Leadership style must match the work. The core insight of Altman’s testimony isn’t about Musk personally, it’s about the mismatch between aggressive, metrics-driven management and the patient, exploratory nature of deep research.
Frequently Asked Questions
Q: When did Elon Musk leave OpenAI? A: Musk resigned from OpenAI’s board in February 2018. The official reason was to avoid a conflict of interest with Tesla’s AI work, but court testimony has since revealed that a failed power struggle over control of the company was a major factor.
Q: What did Elon Musk say to OpenAI employees when he left? A: According to testimony, Musk told employees he saw no path forward for OpenAI and was going to focus on AGI at Tesla instead. He also said his focus wouldn’t be on AI safety, which generated a “strong, negative reaction” from staff.
Q: What is the Musk vs OpenAI trial about? A: Musk sued OpenAI and Microsoft, alleging that OpenAI abandoned its original nonprofit mission to benefit humanity. He is seeking $180 billion in damages and asking the court to restore OpenAI’s nonprofit status and remove Altman and Brockman from leadership.
Q: How much did Elon Musk invest in OpenAI? A: Musk invested at least $38 million in OpenAI’s early years, though some sources cite figures as high as $50 million.
Q: Does Elon Musk still own part of OpenAI? A: No. Musk has no ownership stake in OpenAI. He left the board in 2018 and later founded xAI, a competing AI company.
The Rift That Defines Modern AI
There’s something almost Shakespearean about the Musk-Altman story. Two brilliant, ambitious men came together to build something they believed would save humanity, and then split apart over the question of who should be in charge.
Altman’s testimony this week didn’t just score points in a courtroom battle. It surfaced a deeper truth about leadership, power, and the cultures we create. Musk’s departure from OpenAI wasn’t a loss for the company. By Altman’s account, it was a liberation, a moment when the people actually doing the work realized they could finally breathe.
“It was a morale boost in some ways,” Altman said. And really, isn’t that one of the most quietly devastating things one former partner can say about another?
The trial continues. The verdict, whenever it comes, will reshape the AI landscape. But regardless of which side wins, Altman’s words have already lodged themselves into the story of modern tech: sometimes the biggest favor a founder can do for a company is leave.