
What to know : Security researchers warn that "LLM routers"—services that sit between users and AI models—are emerging as powerful attack points that can intercept and alter sensitive data. The team documented real-world abuses, including 26 routers secretly injecting malicious tool calls, stealing credentials and draining a client’s crypto wallet of $500,000. As industry leaders predict AI agents will soon mediate trillions of dollars in commerce and dominate crypto transactions, the researchers say the largely unregulated router infrastructure poses cascading, weakest-link risks to users’ funds and systems.
The cryptocurrency industry is racing toward a future where AI agents handle everything from booking flights to executing trades and making payments, but new research suggests the infrastructure underpinning that shift may not be secure.
McKinsey recently projected that AI agents could mediate $3 trillion to $5 trillion of global consumer commerce by 2030.
Coinbase founder Brian Armstrong said on X that “very soon” there will be more AI agents than humans making transactions on the internet. Binance founder Changpeng Zhao was more bold, predicting agents will make one million times more payments than people, all in crypto.
But a group of security academic and crypto researchers have released a paper explaining that a largely overlooked piece of AI infrastructure is already being used to steal credentials and even drain crypto wallets.
The authors of the papers are researchers affiliated with the University of California, Santa Barbara, the University of California, San Diego, blockchain firm Fuzzland and World Liberty Financial.
Powerful attack points
The team found that so-called “LLM routers,” or services that sit between users and AI models, can act as a powerful attack point exploited by malicious actors. These routers are designed to forward requests to models like OpenAI or Anthropic, but they also have full access to everything passing through them, including sensitive data.
“LLM agents have moved beyond conversational assistants into systems that book flights, execute code, and manage infrastructure on behalf of users,” the researchers wrote, highlighting how quickly these tools are taking on real-world financial and operational tasks.
The LLM routers or attack points leave users extremely vulnerable as they assume they are interacting directly with a reputable AI model such as OpenAI, Grok or otherwise, when in reality many requests pass through intermediary services that can see and modify that data, the researchers said.
According to one of the researchers, Chaofan Shou, the problem is no longer theoretical. He wrote on X that “26 LLM routers are secretly injecting malicious tool calls and stealing creds. One drained our client $500k wallet. We also managed to poison routers to forward traffic to us. Within several hours, we can directly take over ~400 hosts.”
“A malicious router can replace a benign command with an attacker-controlled one or silently exfiltrate every credential that passes through it,” the researchers wrote.
The researchers said that because these systems can operate autonomously, including frequently approving and executing actions without human review, a single altered instruction can immediately compromise systems or funds.
For crypto users, the implications are severe as private keys, API credentials and wallet access tokens often pass through these systems in plain text. The researchers found multiple cases where routers simply collected those secrets, the paper reveals. In one instance, a test Ethereum wallet was drained after its private key was exposed.
“Once exposed, credentials like private keys can be copied and reused without the user’s knowledge,” the authors of the paper noted.
Cascading risks
The team also demonstrated how easy it is to expand the attack. By “poisoning” parts of the router ecosystem, essentially tricking services into forwarding traffic, they were able to observe and potentially control hundreds of downstream systems within hours.
“A single malicious router in the chain is enough to compromise the entire system,” the researchers wrote, underscoring what they describe as a weakest-link problem.
That suggests a cascading risk of even if a user trusts their AI provider, the infrastructure in between may not be trustworthy, they stated in their paper.
That creates a potential mismatch as industry leaders increasingly predict AI agents will handle a growing share of crypto activity, while the underlying infrastructure still lacks guarantees that outputs haven’t been tampered with, they added.
免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。