Elon Musk is testifying in his lawsuit against OpenAI

He warned in court that unchecked AI could lead to a “Terminator” scenario and even threaten humanity

The case centers on OpenAI’s shift to a for-profit model and what that means for AI safety and control

Elon Musk used his time on the witness stand in his lawsuit against OpenAI to deliver a blunt warning about artificial intelligence.

“The worst-case situation is where it is a Terminator situation,” he said, describing what he believes could happen if the technology develops without sufficient safeguards.

The remark landed as part of his effort to frame the case as something more consequential than a dispute over corporate structure. Musk is suing Sam Altman and OpenAI leadership, arguing that the organization has drifted from its founding mission as a nonprofit designed to benefit humanity. In his telling, that shift is not just a matter of governance. It has implications for how quickly and how safely advanced AI is built.

In court, Musk leaned heavily on the idea that the stakes extend beyond balance sheets or boardroom control.

“The biggest risk would be that AI kills us all,” he testified. “That is the outcome we need to avoid, and it requires being extremely careful about how these systems are developed.”

Terminator AI

Musk, who helped found OpenAI in 2015 and later left, argues that the changes in the company represent a betrayal of the original agreement and intent behind the project. OpenAI disputes that characterization and says the shift was necessary to secure the resources required to compete in a rapidly escalating AI race. The company also points out that Musk has since launched his own competing AI venture, complicating his position as a critic.

That underlying disagreement has produced a trial filled with technical arguments about contracts and corporate governance. Yet Musk’s testimony has consistently pushed beyond those boundaries. He has tried to anchor his case in a larger narrative, complete with cinematic references.

“If we build the robots, I can make sure that they’re safe, and we don’t have a Terminator future situation,” Musk said.

The judge has shown some impatience with that framing, more than once urging Musk during testimony to focus more closely on the legal issues at hand.

OpenAI’s future

Musk’s legal team argues that OpenAI’s leadership effectively changed the nature of the organization without honoring the expectations of its early supporters. OpenAI’s attorneys counter that evolution was always part of the plan and that Musk’s interpretation is both selective and self-serving.

Musk’s emphasis on existential risk fits neatly into his strategy of highlighting intent. By arguing that OpenAI’s founding mission was about more than building successful products, he casts the company’s profit focus as a breach of the original agreement.

For the court, however, the decision will not hinge on cinematic imagery. The judge and jury are tasked with determining whether OpenAI violated agreements or misrepresented its intentions, not whether AI might build Arnold Schwarzenegger robots.
