
Odaily Exclusive Interview with Yu Xian: How Does the Leak of Anthropic's Nuclear-Level New Model Affect Crypto Security Defense and Offense?

Odaily星球日报 · 4 hours ago

Original | Odaily Planet Daily (@OdailyChina)

Author|Azuma (@azuma_eth)

An unexpected data leak has revealed to the world in advance a nuclear-grade product that Anthropic is about to launch.

Fortune reported last Thursday that Anthropic, the AI company behind Claude, is training a new model called Mythos (internal code name suspected to be Capybara), which the company describes internally as “the most powerful AI model developed to date.” Cybersecurity researchers who reviewed the materials said the model was discovered in a draft blog post left in an unprotected, publicly searchable data cache (now inaccessible), and Anthropic confirmed the model's existence after an inquiry from Fortune.

Anthropic describes Capybara as a new tier of models; compared to Claude's current strongest model tier Opus 4.6, Capybara has significantly improved scores in software coding, academic reasoning, and cybersecurity tests.

As early as last December, Anthropic conducted a test leveraging AI to autonomously attack cryptocurrency smart contracts, demonstrating that profitable and reusable AI autonomous attacks are technically feasible — see "Successfully Simulated Theft of $4.6 Million, AI Has Learned to Autonomously Attack Smart Contracts."

Now, with the emergence of a more powerful model with cybersecurity specialization capabilities, what changes will occur in the security offense and defense landscape of cryptocurrency? To thoroughly answer these questions, Odaily Planet Daily specially invited industry security expert and Slow Fog founder Yu Xian (X: @evilcos) to clarify.

AI's security threats come faster than you think

At the beginning of the conversation, Yu Xian stated directly that many in the industry still treat AI's security threats as a matter for “the future,” but reality may move faster than the industry imagines — AI's impact on cryptocurrency security is not approaching; it has already begun. In his view, the pathways through which AI affects cryptocurrency security fall mainly into two categories.

The first category is attackers actively using AI for malicious purposes. This includes the social engineering attacks that have run rampant in the cryptocurrency industry over the past two years, launched through deepfake videos and voice fraud on social media; it also includes more “technical” direct attack schemes that draw on public vulnerability samples and real attack cases, using AI to train methodologies for vulnerability discovery and exploitation — and this is not limited to smart contracts; any security process that can be trained and practiced against historical experience may become a domain for AI application.

The second category of risk is easy to overlook today but more deserving of the industry's vigilance — projects developing with AI may inadvertently introduce new security issues into their own systems. As AI programming capabilities keep improving, more project teams are turning to vibe coding to boost productivity. The efficiency gains are visible, but so are the side effects: AI is prone to “hallucinations” and can push hidden risks straight into production through reliance on contaminated data, incorrectly installed packages, or erroneous library references.
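One concrete failure mode here is the hallucinated dependency: the model confidently names a package that does not exist, or exists only as a malicious look-alike, and an automated install pulls in untrusted code. A minimal, hypothetical mitigation sketch in Python — the package names and the allowlist are made up for illustration, not taken from any real project:

```python
# Hypothetical sketch: vet AI-suggested dependencies against an explicit
# allowlist before installing. "requezts" below is a made-up,
# typo-squatting-style name used purely for illustration.
KNOWN_GOOD = {"requests", "numpy", "web3"}

def vet_dependency(name: str) -> bool:
    """Return True only if the dependency is on the project's allowlist."""
    return name.lower() in KNOWN_GOOD

# An AI-generated requirements list with one hallucinated entry:
suggested = ["requests", "requezts"]
approved = [p for p in suggested if vet_dependency(p)]
rejected = [p for p in suggested if not vet_dependency(p)]
```

An allowlist is deliberately conservative: it trades some convenience for a hard guarantee that nothing outside a reviewed set of packages ever reaches the build.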

This is not alarmist. This past February, the lending protocol Moonwell was hacked for $1.78 million due to a faulty price feed formula, and the direct cause of the formula error was that the project relied on faulty code written with Claude Opus 4.6, incorrectly setting the price of cbETH at $1.12 when the actual price should have been around $2,200.
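The Moonwell incident belongs to a broader class of oracle-integration bugs: a price feed's raw answer is scaled with the wrong decimals exponent, shifting the reported price by orders of magnitude. A hypothetical Python illustration of that bug class — this is not the actual Moonwell or Claude-generated code, and the numbers are illustrative only:

```python
# Hypothetical illustration of a decimals-mismatch price-feed bug
# (the general class of error; NOT the actual Moonwell code).
def price_in_usd(raw_answer: int, feed_decimals: int) -> float:
    """Convert a raw oracle answer into a USD price."""
    return raw_answer / 10**feed_decimals

# Chainlink-style USD feeds commonly use 8 decimals:
raw = 220_000_000_000  # represents $2,200.00000000 at 8 decimals

correct = price_in_usd(raw, 8)   # 2200.0
buggy = price_in_usd(raw, 11)    # wrong exponent: 2.2, ~1000x too low
```

A single wrong constant like this passes type checks and unit tests that reuse the same bad assumption, which is exactly why independent review of AI-written oracle code matters.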

As AI reshapes the world comprehensively, it is not only a weapon in the hands of hackers but could also become a tool for project parties to “plant hidden dangers.”

Which projects are most likely to become prey in the AI era?

If AI has entered both offense and defense, the next question becomes very practical: who is more likely to get hit?

Yu Xian's judgment is straightforward: projects holding large amounts of funds are always the top-priority targets. What makes the cryptocurrency industry unique is that protocols carry real money directly, and because of decentralization, a contract's funding status is usually transparent to the outside world. For attackers, the input-output ratio is always the first principle; as long as a protocol's TVL is large enough, it naturally enters the priority attack list and will face ongoing research, scanning, and probing.

Aside from large funding projects, another category of high-risk targets consists of newly launched projects with obvious vulnerabilities. Although the funding scale of such projects is limited, they often become victims of “rush attacks.” With AI's support, batch scanning, automatic identification, and automatic exploitation processes have become increasingly mature, and some new projects may draw simultaneous attention from multiple attack teams due to apparent or even simplistic vulnerabilities shortly after launch, before their funding scales have noticeably increased. In such cases, it is not about who is smarter, but who is faster. Whoever acts first may reap the rewards.

Yu Xian specifically mentioned another category of projects that deserves caution — long-established protocols that have lulled the market into a false sense that “there should be no issues.” A typical example is last year's “failure” of the established protocol Balancer (reference: "Legacy DeFi Crumbles: Balancer V2 Contract Vulnerability Leads to Over $110 Million in Assets Stolen"). Many established projects have run for years without incident and passed multiple rounds of audits, creating an inertia in which both team and users assume “the system is secure enough.” In reality, the more a protocol is treated as implicitly safe, the more likely it is to become the target of long-term research and deliberate breakthrough attempts by certain attack groups. If the project's response is slow, its governance process cumbersome, or the team happens to be on vacation or inattentive, the losses from an exploit can be all the more severe.

What should project parties and users do to defend?

During the dialogue, Yu Xian repeatedly stressed that project parties should more actively embrace AI. The reason is simple: external attackers are arming themselves with AI, and if you remain in the mindset of “just relying on traditional manual audits or that a system running for a long time should be fine,” you are essentially fighting a war with a massive information gap.

From the standpoint of productivity, “using AI to write code” is an inevitable trend; the problem is that you cannot enjoy the efficiency gains AI brings without building matching security processes — the more deeply AI is integrated into development, the more rigorous the cross-review and human validation mechanisms must be before going live. For instance, use multiple AI models for cross-validation, or involve people with real security experience and an understanding of engineering reliability in the final review.
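As a sketch of what such cross-validation could look like in practice: gate a release on independent verdicts from several reviewers, where any single objection blocks the change. The reviewer functions below are placeholders standing in for calls to different AI models plus a human sign-off, not real model APIs:

```python
from typing import Callable

# A reviewer takes a code diff and returns True if it finds no blocking issue.
Review = Callable[[str], bool]

def cross_review(diff: str, reviewers: list[Review]) -> bool:
    """Ship only if every independent reviewer approves."""
    return all(reviewer(diff) for reviewer in reviewers)

# Placeholder reviewers; in practice these would query distinct AI models
# and include a human security reviewer as the final gate.
def no_hardcoded_price(diff: str) -> bool:
    return "HARDCODED_PRICE" not in diff

def always_approves(diff: str) -> bool:
    return True

ok = cross_review("fee = 30", [no_hardcoded_price, always_approves])
blocked = cross_review("HARDCODED_PRICE = 1.12", [no_hardcoded_price, always_approves])
```

The design choice worth noting is the `all(...)` semantics: reviewers do not vote by majority, so a single dissenting model or human stops the release, which matches the "rigorous cross-review before going live" posture described above.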

In simple terms: “Don't lie back — be a bit more proactive.” Projects with high TVLs and significant user deposits in particular should actively combine the strongest available model capabilities with security-team expertise to upgrade the security strategy around their existing systems. Even without relying fully on AI, you should at least understand what tools your opponent is using and how to respond. This also shapes user perception positively: a project willing to openly embrace AI-driven security upgrades and continually reassess its risks at least tells the market that it does not treat its track record as license to get lazy.

Compared to project parties that still have the ability to build systems, allocate budgets, and upgrade processes, ordinary users find themselves in a more passive position in the face of AI security offense and defense upgrades. Yu Xian candidly stated: “For the vast majority of retail investors, protecting themselves is indeed very difficult.”

Those truly able to respond quickly and limit losses when risks materialize are often not ordinary retail investors, but people who already possess strong information-gathering and on-chain operational abilities. They may have built their own monitoring and alert mechanisms, and may even use AI to receive attack alerts automatically. Once a pool or protocol shows anomalies, they can promptly withdraw funds and adjust positions, achieving a degree of loss mitigation. More aggressively, they may even profit from market sentiment during a security incident.
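The kind of alerting described here can be sketched very simply: poll a protocol's TVL and flag an abnormal drop between polls. Everything in this sketch is hypothetical — the 10% threshold is an arbitrary placeholder, and a real setup would read balances from an RPC node or indexer and wire the flag to a notification channel:

```python
# Hypothetical sketch of a TVL anomaly check: compare consecutive polls
# and flag a drop larger than the configured threshold.
def is_anomalous(prev_tvl: float, curr_tvl: float, threshold: float = 0.10) -> bool:
    """Flag a drop of more than `threshold` (default 10%) between two polls."""
    if prev_tvl <= 0:
        return False  # no baseline to compare against
    return (prev_tvl - curr_tvl) / prev_tvl > threshold

# A 20% drop between polls triggers an alert; a 5% drop does not.
alert = is_anomalous(100_000_000, 80_000_000)    # True
normal = is_anomalous(100_000_000, 95_000_000)   # False
```

Even a crude rule like this captures the asymmetry Yu Xian points to: whoever sees the anomaly first gets to exit first.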

However, such individuals are not really ordinary users but “scientists” — crypto slang for sophisticated on-chain power users. For the many users who lack monitoring capability, response speed, and professional judgment, when a real attack occurs they often end up being the last ones to pay the bill.

The reality is indeed harsh; the AI era will not automatically bring about a fairer security environment, but may instead further amplify disparities in information, tools, and response speed between professional users and ordinary users. From the perspective of ordinary users, what can be done is to minimize the time and positions exposed to high-risk protocols, reduce blind trust in complex interactions, and maintain a basic skepticism about narratives that “seem very safe.”

Will the arrival of more powerful models bring greater threats?

This is one of the most interesting questions from the interview. Intuitively, a model that is stronger at coding, reasoning, and cybersecurity would seem to make potential attackers more dangerous once it is deployed. Yu Xian's response, however, is that this is actually a good thing.

From Yu Xian's perspective, the industry's biggest misunderstanding now is to interpret such threats as “things that may happen in the future.” But the reality is that many stronger capabilities actually already exist; they are just not visible to the public (for instance, Mythos was also discovered by accident), or the truly capable teams are more low-profile than the market imagines.

In other words, the emergence of stronger models like Mythos does not necessarily mean risks are being born from zero to one, but rather allows the industry to more clearly realize that many previously only imagined attack capabilities have long been under research, verification, and even use in reality. Yu Xian mentioned in the interview that from vulnerability discovery to vulnerability exploitation, these are two distinct stages. Around these two matters, top model companies and some more vertical, deeper-dive teams (like teams that conduct full-scale private training on AI for smart contract security) may have already accumulated substantial results.

In Yu Xian's logic, stronger models are not purely bad news; they represent a more thorough filtering mechanism. If a project cannot even bear the challenges brought by AI, then it likely should not continue to grow in the future, as AI will increasingly fairly expose issues that have been previously obscured by luck, inertia, and information asymmetries. Projects that can truly remain are not those that “have not been attacked temporarily,” but those that can “endure attacks even in the AI era.”

This means that AI's influence on the cryptocurrency industry acts more like an acceleration of clearance. Vulnerabilities will be discovered more quickly, risks will be exposed earlier, and attacks will become more frequent. Projects with weak security capabilities, rough processes, and slow responses will be eliminated faster in the future.

In the long run, this may not be a bad thing. Because while AI expands the attack surface, it also raises the overall survival standards of the industry. It will force project parties to upgrade development processes, security systems, and response mechanisms, and will push the industry to thoroughly move out of the era of “barbaric growth.”

Disclaimer: This article represents only the author's personal views and does not reflect the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and identity to support@aicoin.com, and platform staff will verify the claim.
