Opinion: Post-mortems won't stop AI-driven cryptocurrency fraud

Author: Danor Cohen, Co-founder and CTO of Kerberus

In 2025, cryptocurrency risks are surging, and AI is accelerating the scams. Deepfake pitches, cloned voices, and synthetic customer-service agents are no longer fringe tools but frontline weapons. Last year, cryptocurrency scams may have reached a historic high: fraud revenue hit at least $9.9 billion, driven in part by generative AI methods.

Meanwhile, more than $2.17 billion has already been stolen in 2025, and that figure covers only the first half of the year. Personal wallets now account for nearly 23% of stolen funds.

However, the industry is largely still using the same outdated toolkit to respond: audits, blacklists, compensation promises, user awareness campaigns, and post-mortem analysis reports. These are all passive, slow, and ill-suited to threats that evolve at machine speed.

AI is a wake-up call for cryptocurrency. It tells us how fragile the current structure is. Unless we shift from fragmented responses to built-in resilience, the risk we face is not a price collapse, but a collapse of trust.

Scams involving deepfakes and synthetic identities have shifted from novelty headlines to mainstream strategies. Generative AI is being used to scale bait, clone voices, and deceive users into sending funds.

The most significant shift is not just a matter of scale. It is the speed and personalization of deception. Attackers can now almost instantly replicate trusted environments or personas. The shift to real-time defense must also accelerate—not just as a feature, but as a critical component of infrastructure.

Outside the cryptocurrency space, regulators and financial authorities are waking up. The Monetary Authority of Singapore (MAS) has issued a deepfake risk advisory to financial institutions, indicating that systemic AI deception has entered their radar.

The threats have evolved, yet the industry's security mindset has not.

The security of cryptocurrency has long relied on static defenses: code audits, bug bounties, and blacklists. These tools are designed to identify code weaknesses, not behavioral deception.

While many AI scams focus on social engineering, AI tools are also increasingly being used to discover and exploit code vulnerabilities, automatically scanning thousands of contracts.

The risks are twofold: technological and human.

When we rely on blacklists, attackers only need to create new wallets or fake domain names. When we depend on audits and reviews, vulnerabilities are already live. When we treat every incident as "user error," we absolve ourselves of responsibility for systemic design flaws.

In traditional finance, banks can block, reverse, or freeze suspicious transactions. In cryptocurrency, signed transactions are final. This finality is one of the hallmark features of cryptocurrency, but when fraud is instantaneous, it becomes a fatal weakness.

Moreover, we often advise users: "Don't click unknown links" or "Verify addresses carefully." These are acceptable best practices, but today's attacks often come from trusted sources.

No amount of caution can keep up with adversaries that continuously adapt and personalize attacks in real-time.

It is time to evolve from defense to design. We need transaction systems that react before harm occurs.

Consider wallets that detect anomalies in real-time, not only flagging suspicious behavior but intervening before harm happens. This means requiring additional confirmations, temporarily holding transactions, or analyzing intent: Is this for a known counterparty? Is the amount unusual? Does the address show a history of previous scam activity?
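
As a rough illustration, such pre-signing intent checks can be sketched as a small policy object. Everything here, from the class name to the thresholds and the block/confirm/hold outcomes, is hypothetical and not the API of any real wallet:

```python
from dataclasses import dataclass, field

@dataclass
class WalletRiskPolicy:
    """Toy pre-signing checks; thresholds and names are illustrative only."""
    known_counterparties: set = field(default_factory=set)
    flagged_addresses: set = field(default_factory=set)
    typical_amount: float = 0.0  # e.g. a rolling average of past transfers

    def assess(self, to_address: str, amount: float) -> str:
        # Block outright if the destination has known scam history.
        if to_address in self.flagged_addresses:
            return "block"
        # Ask for extra confirmation when the counterparty is new.
        if to_address not in self.known_counterparties:
            return "confirm"
        # Temporarily hold transfers far above this user's typical amount.
        if self.typical_amount and amount > 10 * self.typical_amount:
            return "hold"
        return "allow"
```

The point of the sketch is that the decision happens before the signature is produced, while the transaction is still reversible by definition.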

Infrastructure should support a shared intelligence network. Wallet services, nodes, and security providers should exchange behavioral signals, threat address reputations, and anomaly scores. Attackers should not be able to jump between islands unimpeded.
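
A minimal sketch of such a pooled reputation feed, assuming each provider submits an anomaly score per address and consumers read the corroborated aggregate. The class, thresholds, and provider names are invented for illustration, not an existing protocol:

```python
from collections import defaultdict

class SharedReputation:
    """Toy shared threat-intel pool: providers submit anomaly scores (0..1)
    per address; an address is flagged only with multi-provider agreement."""
    def __init__(self, flag_threshold: float = 0.7, min_reports: int = 2):
        self.reports = defaultdict(dict)  # address -> {provider: score}
        self.flag_threshold = flag_threshold
        self.min_reports = min_reports

    def submit(self, provider: str, address: str, score: float) -> None:
        self.reports[address][provider] = max(0.0, min(1.0, score))

    def is_flagged(self, address: str) -> bool:
        scores = list(self.reports[address].values())
        # Require corroboration from multiple providers so a single
        # bad feed cannot blacklist an address on its own.
        if len(scores) < self.min_reports:
            return False
        return sum(scores) / len(scores) >= self.flag_threshold
```

The corroboration requirement is the design point: a shared network only works if no single participant can unilaterally poison it.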

Similarly, contract-level fraud-detection frameworks can review contract bytecode to flag phishing, Ponzi schemes, or honeypot behavior in smart contracts. Today, though, these remain retrospective or bolt-on tools. The crucial step is to integrate these capabilities into user workflows: into wallets, signing processes, and transaction-validation layers.
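
As a toy example of what such a scan looks like at the lowest level, the sketch below walks EVM bytecode and surfaces coarse review flags. Real analyzers do far more (control-flow and data-flow analysis, simulation); the opcode choices here merely illustrate the mechanism, and a flag is a prompt for review, not proof of fraud:

```python
def evm_opcodes(bytecode: bytes):
    """Walk EVM bytecode, skipping PUSH immediates so data bytes are
    not mistaken for opcodes. A sketch; real tools also separate the
    code and data sections of a deployed contract."""
    i = 0
    while i < len(bytecode):
        op = bytecode[i]
        yield op
        if 0x60 <= op <= 0x7F:   # PUSH1..PUSH32 carry 1..32 immediate bytes
            i += op - 0x5F       # skip the immediate data
        i += 1

# Opcodes that commonly warrant a closer look, not a verdict.
SUSPICIOUS = {0xFF: "SELFDESTRUCT", 0xF4: "DELEGATECALL"}

def flag_contract(bytecode_hex: str) -> list:
    """Return coarse review flags found in the contract's bytecode."""
    code = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    return sorted({SUSPICIOUS[op] for op in evm_opcodes(code) if op in SUSPICIOUS})
```

Embedded in a wallet's signing path, even a check this crude runs before funds move, which is the shift the argument above calls for.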

This approach does not require heavy AI everywhere; it needs automated, distributed detection loops and a coordinated consensus on risk, all embedded within transaction channels.

If we let regulators define the fraud-protection frameworks, we will ultimately be constrained by them. And regulators will not wait: they are already preparing to treat financial fraud as part of algorithmic oversight.

If cryptocurrency does not voluntarily adopt systemic protective measures, regulation will impose them—potentially by stifling innovation or enforcing rigid frameworks of centralized control. The industry can lead its own evolution or let it be legislated.

Our job is to restore confidence. The goal is not to make hacking impossible, but to make irreversible losses intolerable and extremely rare.

We need "insurance-grade" behavior: effectively monitored transactions with built-in fallback checks, pattern obfuscation, anomaly pause logic, and shared threat intelligence. Wallets should no longer be simple signing tools but active participants in risk detection.

We must challenge dogma. Self-custody is necessary, but not enough. We should stop viewing security tools as optional—they must be the default. Education is valuable, but design is decisive.

The next frontier is not speed or yield; it is fraud resilience. Innovation should be measured not by how fast blockchains settle, but by how reliably they prevent malicious flows.

Yes, AI has exposed the weaknesses in cryptocurrency security models. But the threat is not smarter scams; it is our refusal to evolve.

The answer is not to embed AI in every wallet; it is to build systems that make AI-driven deception unprofitable and unfeasible.

If defenders remain passive, issuing post-mortems and blaming users, deception will continue to outpace defenses.

Cryptocurrency does not need to outsmart AI in every battle; it needs to out-design it by embedding trust into the system itself.

This article is for general informational purposes only and is not intended and should not be construed as legal or investment advice. The views, thoughts, and opinions expressed here are solely those of the author and do not necessarily reflect or represent the views and opinions of Cointelegraph.

Original article: “Opinion: Post-mortems won’t stop AI-driven cryptocurrency fraud”
