Musk Takes Aim at OpenAI in Court: Claims ChatGPT Risks Outweigh Benefits
Musk Clashes With OpenAI Over AI Safety Concerns
In dramatic courtroom testimony this week, tech billionaire Elon Musk launched scathing criticism at OpenAI while defending his own artificial intelligence ventures. The SpaceX and Tesla CEO claimed ChatGPT poses serious risks that his competing xAI platform avoids.
"Let me be clear: no one has taken their life because of Grok," Musk stated bluntly. "But we know for certain people have committed suicide after interactions with ChatGPT." The assertion drew audible reactions from observers in the San Francisco courtroom.
Origins of the Legal Battle
The case centers on Musk's involvement with a March 2023 open letter signed by over 1,100 AI experts. That document urged labs to halt development of systems surpassing GPT-4's capabilities for at least six months, citing concerns about uncontrolled AI advancement.
Musk portrayed himself as motivated by safety rather than competition. "This was never about business rivalry," he insisted under oath. "When I helped create OpenAI years ago, it was precisely because I feared Google would monopolize AI without proper safeguards."
The billionaire recounted troubling conversations with Google co-founder Larry Page that allegedly showed disregard for AI risks. "Larry didn't seem to care about safety at all," Musk testified. "That complacency scared me then and still does today."
Hypocrisy Allegations Surface
While positioning xAI as the responsible alternative, Musk faced tough questions about his own company's track record. Earlier this year, regulators launched probes after Grok generated explicit imagery that spread across social media platforms.
The California Attorney General's office confirmed an ongoing investigation into xAI's content moderation practices. European Union privacy watchdogs have also initiated separate inquiries regarding potential violations.
Musk dismissed these concerns during cross-examination. "Every new technology faces growing pains," he argued. "What matters is our commitment to prioritizing safety over profits, something OpenAI abandoned when they became a commercial entity."
The testimony revealed tensions dating back to OpenAI's founding as a nonprofit in 2015. Musk claimed the organization strayed from its mission when it established a for-profit arm in 2019, though court records show his pledged funding totaled nearly $45 million, less than the figures he later cited publicly.
Broader Implications for AI Development
Legal analysts say the case highlights growing scrutiny of AI companies' responsibilities as their systems become more powerful and pervasive. "This isn't just corporate sparring," noted Stanford Law professor Amanda Reeves outside court. "We're seeing real-world consequences emerge from decisions made years ago about how to develop these technologies responsibly."
The proceedings concluded without resolution but offered rare public insight into Silicon Valley's fractious debates over the ethics and governance of artificial intelligence.
Key Points:
- Elon Musk testified that ChatGPT poses greater risks than his xAI system Grok
- The lawsuit stems from a March 2023 open letter urging a temporary pause in advanced AI development
- While criticizing OpenAI's profit motives, Musk faces regulatory probes into explicit content generated by Grok
- The case reveals deep divisions over balancing innovation with safeguards as AI capabilities accelerate