Elon Musk confirms xAI used OpenAI models to train Grok


In a California court on Thursday, Elon Musk testified that his AI startup, xAI, has used OpenAI's models to train its own.

At issue is model distillation, a common industry practice in which a larger AI model acts as a "teacher," providing outputs that a smaller "student" model learns to imitate. While it's often used legitimately by companies that use one of their own AI models to train another, it's a practice sometimes used by smaller AI labs to get their models to mimic the behavior of a larger competitor's.
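The core idea of the teacher-student setup described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the standard distillation loss (softened softmax plus KL divergence), not a description of how xAI or any lab mentioned here actually trains models; all function names are invented for the example.

```python
# Minimal sketch of knowledge distillation's core mechanic:
# a "student" model is trained to match the softened output
# distribution of a larger "teacher" model.
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student
    distributions; the student is trained to minimize this."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose outputs match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss it would train to reduce.
teacher = [4.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [4.0, 1.0, 0.2])
mismatched = distillation_loss(teacher, [0.2, 1.0, 4.0])
```

In practice the "teacher" outputs may simply be text generated by a competitor's API rather than raw logits, which is what makes the legitimate technique easy to repurpose against another lab's model.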

Asked on the spot if he knew what distillation was, Musk said it was using one type of AI to train another. When asked if xAI has distilled OpenAI's technology, Musk appeared to dodge the question, saying that "mostly all AI" does this. And when asked if that was a yes, he said, "Maybe."

When pressed, Musk said, “It’s about using other AIs to validate your AI.”

Model distillation has become increasingly common in recent years, and it has stirred considerable controversy among AI labs, since the line between what is acceptable and what violates laws or company policies often falls into a gray area. Companies such as OpenAI and Anthropic have criticized Chinese companies for distilling their models, with OpenAI speaking openly about its concerns regarding DeepSeek, and Anthropic specifically naming DeepSeek, Moonshot, and MiniMax. Google, too, has taken steps to prevent what it calls a "distillation attack," or "a method of stealing intellectual property that violates Google's rules."

In its own blog post, Anthropic wrote, "Distillation is a widely used and accepted training method. For example, frontier AI labs regularly distill their own models to produce smaller, cheaper versions for their customers."


