Critics Mock Anthropic's Claims Chinese AI Labs Are Stealing Its Data

Decrypt
3 hours ago

Anthropic has accused three Chinese AI labs of extracting millions of responses from its Claude chatbot to train competing systems, a move the company claims violates its terms of service and weakens U.S. export controls.


In a blog post published Monday, Anthropic said it identified “industrial-scale campaigns” by AI developers DeepSeek, Moonshot, and MiniMax to extract Claude’s capabilities through model distillation. The company alleged the labs generated more than 16 million exchanges using roughly 24,000 fraudulent accounts.


Anthropic’s announcement drew skepticism and mockery on X, where critics questioned its stance given how major AI models, including Claude, are themselves trained, fueling the broader debate over intellectual property, copyright, and fair use.



“You trained on the open internet and then call it ‘distillation attacks’ when others learn from you,” wrote Tory Green, co-founder of AI infrastructure firm IO.Net. “Labs that like to preach ‘open research’ suddenly crying about open access.”



“Ohhh nooo not my private IP, how dare someone use that to train an AI model, only Anthropic has the right to use everyone else's IP nooooo, this cannot stand!” another X user wrote.


Distillation is an AI training method in which a smaller model learns from the outputs of a larger one.


In cybersecurity contexts, it can also describe model extraction attacks, in which an attacker with legitimate access systematically queries a system and uses its responses to train a competing model.
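The core idea can be illustrated in a few lines. In classic knowledge distillation, a student model is trained to match the teacher's softened output probabilities rather than hard labels. The sketch below is a minimal, generic illustration of that loss, not a depiction of Anthropic's systems or any lab's actual pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; a higher temperature
    softens the distribution, exposing more of the teacher's 'dark knowledge'."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.
    Real distillation minimizes this loss over many examples so the
    student comes to mimic the teacher's behavior."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(p * math.log(p / q) for p, q in zip(t, s))

# A student whose outputs already match the teacher incurs zero loss;
# a mismatched student incurs a positive loss that training would reduce.
teacher = [2.0, 1.0, 0.1]
matched = distillation_loss(teacher, [2.0, 1.0, 0.1])
mismatched = distillation_loss(teacher, [0.1, 1.0, 2.0])
```

The extraction scenario Anthropic describes differs in scale, not in kind: instead of having the teacher's logits, an attacker collects millions of the teacher's text responses and fine-tunes a student on them.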


“These campaigns are growing in intensity and sophistication,” Anthropic wrote Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”


“Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers,” Anthropic wrote in a separate X post. “But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems.”


In June, Reddit sued Anthropic, accusing it of scraping more than 100,000 posts and comments and using the data to fine-tune Claude.


The case joins lawsuits against OpenAI, Meta, and Google over the large-scale scraping of online content without permission.


“[There’s] the public face that attempts to ingratiate itself into the consumer’s consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets,” the Reddit lawsuit said.


Anthropic said it is expanding detection, tightening account verification, sharing intelligence with other labs and authorities, and adding safeguards to limit future distillation attempts.


“But no company can solve this alone,” Anthropic wrote. “As we noted above, distillation attacks at this scale require a coordinated response across the AI industry, cloud providers, and policymakers.”


