Elon Musk Appears to Confirm xAI Used OpenAI Models in Its Training


While testifying Thursday in federal court, Elon Musk appeared to indicate that his AI lab, xAI, may have used OpenAI's models to help train its own. He touched on the topic while on the witness stand, answering questions from OpenAI's attorney during Musk's legal action against the maker of ChatGPT.

Here’s the exchange, as recorded by WIRED:

OpenAI attorney William Savitt: Do you know what distillation is?

Musk: It means using one type of AI to train another type of AI.

Savitt: Has xAI done this with OpenAI?

Musk: Generally all AI companies (do that).

Savitt: So yes.

Musk: A little bit.

Distillation is the process by which a smaller AI model is trained to mimic the outputs of a larger, more capable model, making it cheaper and faster to run while retaining much of the larger model's capability.
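To make the concept concrete, here is a minimal, hypothetical sketch of the core idea in plain Python (not any lab's actual pipeline): the student model is trained to match the teacher's softened probability distribution, typically by minimizing the KL divergence between the two.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; a higher temperature softens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    The teacher's soft targets carry more information than hard labels:
    the relative probabilities it assigns to wrong answers reveal how it
    generalizes, and the student learns to copy that behavior.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current outputs
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example: the loss is zero when the student already matches the teacher,
# and positive otherwise; training would push it toward zero.
teacher = [5.0, 2.0, 0.5]
student = [4.0, 2.5, 1.0]
loss = distillation_loss(student, teacher)
```

In practice the loss is computed over many prompts and minimized with gradient descent on the student's weights; the sketch above shows only the per-example objective.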

Savitt later asked whether OpenAI's technology had been used in any way to build xAI's models:

Savitt: Has OpenAI technology been used in any way to develop xAI?

Musk: It’s a practice to use other AIs to validate your AI.

OpenAI and xAI did not immediately respond to WIRED’s request for comment.

OpenAI has been trying to prevent competitors from distilling its AI models, most notably the Chinese AI lab DeepSeek. In a February 2026 memo to a House committee, OpenAI wrote that it has “taken steps to protect and harden our models from corruption.” In the memo, OpenAI said it is focused on ensuring that “China cannot advance autonomous AI by copying and repurposing American innovations.”

The Trump administration has also taken steps to prevent Chinese companies from distilling American AI models. Michael Kratsios, director of the White House Office of Science and Technology Policy, said in an April 2026 memo that the government will share information with US AI companies about foreign distillation efforts. Kratsios said in a post on X that “the US government is committed to the free and fair development of AI technologies across a competitive environment.”

American AI labs have used one another's models in other ways, such as benchmarking progress and assessing safety. But as competition has intensified, some AI companies have cut off rival labs. In August 2025, Anthropic restricted OpenAI's access to its Claude models after saying the company had violated its terms. More recently, Anthropic also blocked xAI from using its models for coding.

Over hours of questioning, Savitt asked Musk about his attempt to take control of OpenAI and, later, his quest to beat the maker of ChatGPT. On Wednesday, Savitt introduced emails and documents from 2017 to support a line of questioning suggesting that Musk pressured OpenAI by withholding funding and hiring away key researchers.



