DeepSeek’s success learning from bigger AI models raises questions about the billions being spent on the most advanced technology.
Experts say AI model distillation is likely widespread and hard to detect, but DeepSeek has not admitted to using it on its full models.
OpenAI claims to have found evidence that Chinese AI startup DeepSeek secretly used data produced by OpenAI’s technology to improve its own AI models, according to the Financial Times. If true, DeepSeek would be in violation of OpenAI’s terms of service. In a statement, OpenAI said it is actively investigating.
OpenAI itself has been accused of building ChatGPT by training on content it did not have the rights to use.
DeepSeek faces allegations of using OpenAI's outputs to train its AI, a dispute with legal, ethical, and competitive implications.
The DeepSeek drama may have been briefly eclipsed by, you know, everything in Washington (which, if you can believe it, got even crazier Wednesday). But rest assured that over in Silicon Valley, there has been nonstop discussion of it.
The San Francisco start-up claims that its Chinese rival may have used data generated by OpenAI technologies to build new systems.
Did DeepSeek violate OpenAI's IP rights? An ironic question given OpenAI's past with IP rights. What can we learn from this classic playbook to protect a business?
OpenAI believes DeepSeek used a process called “distillation,” which helps make smaller AI models perform better by learning from larger ones.
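At its core, distillation trains a small "student" model to mimic the full output distribution of a larger "teacher" model, rather than learning only from hard labels. A minimal sketch of the standard soft-label distillation loss (the function names and temperature value here are illustrative, not drawn from either company's actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    # Soften the distribution: a higher temperature makes it more uniform,
    # exposing the teacher's relative confidence across wrong answers too.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened outputs and the student's.
    # Minimizing this pushes the student to reproduce the teacher's entire
    # probability distribution, not just its top prediction.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example: a student whose logits already match the teacher's incurs zero
# loss; a mismatched student incurs a positive loss it can descend on.
teacher = [3.0, 1.0, 0.2]
loss_close = distillation_loss(teacher, [3.0, 1.0, 0.2])
loss_far = distillation_loss(teacher, [0.1, 2.0, 1.5])
```

In the alleged scenario, the "teacher" signal would simply be text generated by OpenAI's models, used as training targets for DeepSeek's systems, which is what makes distillation both cheap and hard to detect from the outside.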
After DeepSeek AI shocked the world and tanked the market, OpenAI says it has evidence that ChatGPT distillation was used to train the model.
"I don't think OpenAI is very happy about this," said the White House's AI czar, who suggested that DeepSeek used a technique called distillation.