AI agents can be verified, but who will protect their privacy?

PANews
1 day ago

Author: Xiaobai

Title: DevRel at ETHPanda

This article is an original submission; the views expressed represent only the author's personal understanding. The content was edited and organized by ETHPanda.

AI Agents are transitioning from "tools that automatically perform tasks" to participants in on-chain economies. They may trade on behalf of users, participate in governance, invoke DeFi protocols, submit predictions to markets, and even accumulate reputation across multiple protocols.

But a key question arises: If Agents are to participate in open networks, what basis do others have to trust them?

ERC-8004 attempts to answer this question. It provides AI Agents with an open trust infrastructure, including identity registration, reputation records, and validation mechanisms. Through these components, Agents can possess portable on-chain identities, accumulate reputation across applications, and be subject to independent verification. It is important to note that ERC-8004 is still in the Draft stage, and related interfaces and names may still be subject to change.
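
As an illustration of how these three components fit together, here is a minimal in-memory sketch in Python. All names (`TrustRegistry`, `AgentRecord`, and so on) are hypothetical, not the draft's actual contract interfaces, which may still change:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: int
    owner: str          # controlling address
    metadata_uri: str   # off-chain profile (model, endpoints, ...)

@dataclass
class TrustRegistry:
    # The three ERC-8004 components, modeled as plain dictionaries.
    identities: dict = field(default_factory=dict)   # agent_id -> AgentRecord
    feedback: dict = field(default_factory=dict)     # agent_id -> [(client, score)]
    validations: dict = field(default_factory=dict)  # agent_id -> [(validator, passed)]
    _next_id: int = 1

    def register(self, owner: str, metadata_uri: str) -> int:
        """Identity registration: mint a portable agent ID."""
        agent_id = self._next_id
        self._next_id += 1
        self.identities[agent_id] = AgentRecord(agent_id, owner, metadata_uri)
        return agent_id

    def leave_feedback(self, agent_id: int, client: str, score: int) -> None:
        """Reputation record: feedback accumulates across applications."""
        self.feedback.setdefault(agent_id, []).append((client, score))

    def attest(self, agent_id: int, validator: str, passed: bool) -> None:
        """Validation mechanism: independent verifiers attest to the agent."""
        self.validations.setdefault(agent_id, []).append((validator, passed))

reg = TrustRegistry()
aid = reg.register("0xOperator", "ipfs://agent-card")
reg.leave_feedback(aid, "0xClient", 5)
reg.attest(aid, "0xValidator", True)
```

Note that every record in this sketch is openly readable — which is exactly the property the rest of this article questions.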

This is crucial for the Agent economy. Without a unified layer of identity and reputation, it is difficult to establish long-term trust among Agents, between Agents and users, and between Agents and protocols. Each application would have to start from scratch to determine whether an Agent is credible, which would fragment the entire ecosystem.

However, the recently proposed ACTA (Anonymous Credentials for Trustless Agents) reminds us that the trust layer addresses the question of "how to prove" but does not fully solve the issue of "what is exposed in the proof process." It is worth noting that ACTA currently resembles a research draft and design direction rather than a completed standard implementation.

01 Verifiable does not mean everything should be public

On-chain, verifiability often implies public access.

If an Agent leaves identity, interactions, feedback, and verification records in the ERC-8004 registry, these pieces of information could be tracked by indexers over the long term. For ordinary applications, this might just signify transparency; but in DeFi, governance, prediction markets, and compliance scenarios, these public records could directly expose strategies, relationships, and business intentions.

Imagine a DeFi protocol utilizing multiple AI Agents to handle liquidity routing, risk assessments, and liquidation tasks. Every Agent call, every feedback record, and every task label could be reconstructed into an interaction graph by external observers.

This graph is not just metadata. It could reveal which models the protocol is using, which service providers it relies on, which strategies it prefers, and could even expose undisclosed business relationships.

The same issues arise in governance and prediction markets. If an Agent votes on behalf of a user, evaluates proposals, or participates in predictions, the public interaction records might allow external observers to deduce the user's identity, political preferences, trading intentions, or organizational affiliations.

Therefore, the Agent economy cannot only discuss "how to establish trust," but must also address "which trust proofs should not be public."

02 The privacy layer that ACTA aims to supplement

ACTA is not intended to replace ERC-8004 but rather to serve as a privacy layer above it.

Its core idea is: Let Agents prove that they meet certain conditions without exposing underlying data.

For example, a protocol can require an Agent to prove:

  • It has passed a certain audit;
  • Its audit score exceeds a certain threshold;
  • It is using an approved version of the model;
  • The operators behind it are not in certain restricted jurisdictions;
  • It has sufficient historical reputation;
  • It is authorized by a verified human principal.

In traditional open blockchain design, an Agent might need to expose audit scores, model hashes, wallet addresses, feedback records, or operator information. However, ACTA aims to enable Agents to only prove "I meet this strategy" without disclosing "how I meet it."

In other words, validators do not need to know the complete identity and full history of the Agent, only whether it complies with the entry rules set by the current protocol.
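
A toy sketch of this "prove the condition, not the data" interface, using the audit-score threshold as an example. This is not real zero-knowledge cryptography — a production system would use a zk proof system — and every name here is hypothetical; the point is only what crosses the wire (a public commitment, a threshold, and a pass/fail proof):

```python
import hashlib
import os

def commit(score: int, blinding: bytes) -> bytes:
    """Hiding commitment to a private audit score."""
    return hashlib.sha256(blinding + score.to_bytes(4, "big")).digest()

def prove_threshold(score: int, blinding: bytes, threshold: int):
    """Prover side: succeeds only if the private score clears the bar.
    A real ZK proof would reveal nothing beyond validity; this toy
    carries the witness so the toy verifier below can recheck it."""
    if score < threshold:
        return None
    return {"witness": (score, blinding)}

def verify_threshold(commitment: bytes, threshold: int, proof: dict) -> bool:
    """Verifier side: learns only pass/fail; binding the proof to the
    commitment stops the prover from swapping in a different score."""
    score, blinding = proof["witness"]
    return commit(score, blinding) == commitment and score >= threshold

blinding = os.urandom(16)
c = commit(87, blinding)                           # published; reveals nothing
proof = prove_threshold(87, blinding, threshold=80)
assert proof is not None and verify_threshold(c, 80, proof)
assert prove_threshold(50, blinding, threshold=80) is None  # fails, score stays private
```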

03 Transitioning from "public identity" to "strategy proof"

The key shift of ACTA is moving the trust model from "public identity" to "strategy proof."

Within this framework, protocols can register a set of validation strategies. When an Agent enters a scenario, it does not present all of its credentials directly; instead, it submits a zero-knowledge proof that it satisfies the strategy.

What on-chain validators see might be just a strategy ID, a proof result, and a context-bound nullifier. The nullifier prevents reuse or double voting, but it does not bind all of the Agent's activities across different scenarios to a single public identity.
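
A minimal sketch of such a context-bound nullifier (the hash construction is illustrative; real schemes derive the nullifier inside the proof circuit so the secret is never exposed):

```python
import hashlib

def nullifier(secret: bytes, context: str) -> bytes:
    """Same secret + same context -> same tag, so reuse is detectable;
    different contexts -> unlinkable tags, so activities in different
    scenarios cannot be joined into one public identity."""
    return hashlib.sha256(secret + b"|" + context.encode()).digest()

secret = b"agent-private-key-material"        # never leaves the Agent

tag_42 = nullifier(secret, "dao-vote:proposal-42")
tag_43 = nullifier(secret, "dao-vote:proposal-43")

# A second vote on proposal 42 produces the same tag and is rejected.
assert nullifier(secret, "dao-vote:proposal-42") == tag_42
# Votes on different proposals carry unlinkable tags.
assert tag_42 != tag_43
```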

This is particularly important for reputation systems.

If a user wants to leave feedback for a certain Agent, the system needs to prevent score manipulation and repeated evaluations. However, if every piece of feedback is tied to a public address, the interaction relationship between the user and the Agent would be permanently exposed. ACTA seeks to allow users to prove "I have indeed had valid interactions with this Agent and have not provided duplicate feedback," without revealing their address and full interaction history.
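
Sketching the deduplication side of this: the contract-side state below stores only spent nullifiers and scores, never rater addresses. All names are hypothetical, not taken from the ACTA draft:

```python
import hashlib

def feedback_nullifier(rater_secret: bytes, agent_id: int) -> bytes:
    """One tag per (rater, agent) pair; reveals neither."""
    data = rater_secret + b"|feedback|" + agent_id.to_bytes(8, "big")
    return hashlib.sha256(data).digest()

class FeedbackBook:
    def __init__(self):
        self.spent = set()    # nullifiers already used
        self.scores = []      # scores with no attached identity

    def submit(self, tag: bytes, score: int) -> bool:
        if tag in self.spent:   # same rater rating the same Agent twice
            return False
        self.spent.add(tag)
        self.scores.append(score)
        return True

book = FeedbackBook()
alice = b"alice-secret"
assert book.submit(feedback_nullifier(alice, 7), 5)       # accepted
assert not book.submit(feedback_nullifier(alice, 7), 1)   # duplicate rejected
assert book.submit(feedback_nullifier(alice, 8), 4)       # different Agent: fine
```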

This enables reputation to be verifiable without transforming it into a publicly visible relationship graph.

04 Why is this important for AI Agents?

AI Agents are different from ordinary smart contracts.

Smart contracts are usually static code with relatively clear behavioral boundaries; Agents are more like continuously acting entities that may adjust their strategies as the environment changes or act on behalf of users across multiple protocols.

This means that the Agent's identity, authority, source of models, reputation, and authorization relationships become sensitive.

If users one day delegate trading, voting, research, liquidation, and quoting to Agents, the behavioral trajectories of those Agents will likely become proxies for user intentions. Observing an Agent can amount to indirectly observing the user.

This is also why ACTA discusses "on-behalf-of delegation": Agents may need to prove they are acting under the authority of a verified human principal but should not publicly disclose this person's real identity.

For DAO governance, this helps protocols distinguish "Agents authorized by real participants" from "completely unconstrained bots." For DeFi, it lets protocols verify an Agent's compliance and risk qualifications without exposing all business relationships to competitors. For prediction markets, it reduces the risk of participants being re-identified or their strategies copied.

05 ACTA remains an open question

Of course, ACTA currently resembles a research and design direction rather than a completed standard implementation.

The original proposal also lists several open issues, including the size of the anonymity set, centralization risk among credential issuers, the threshold for de-anonymizing malicious Agents, cross-chain credential portability, and the cost and latency of client-side proof generation.

These issues are not trivial. A privacy system will only be adopted by real protocols when the anonymity set is large enough, the issuers are trustworthy enough, proof costs are low enough, and the developer experience is good enough.

Otherwise, it might remain theoretically correct but difficult to enter production environments.

However, even so, the direction suggested by ACTA remains important. Because it points out a basic contradiction in the Agent trust layer: we need verifiable Agents, but we should not require Agents, users, and protocols to pay the excessive price of public exposure for verifiability.

06 What should the Chinese community focus on?

For the Chinese-speaking community, the lesson of ACTA is not just a new privacy-technology proposal; it is a prompt to rethink AI Agent infrastructure.

Past discussions of the Agent economy commonly focused on model capabilities, automated execution, on-chain identity, and reputation systems. But as Agents move into financial, governance, and compliance scenarios, privacy will shift from an "optional feature" to a "fundamental condition."

A truly usable Agent trust layer cannot merely answer:

"Is this Agent trustworthy?"

It must also answer:

"What information was exposed while proving trust?"

If all Agents' interactions, feedback, credentials, and authorization relationships are permanently public, then the on-chain Agent economy could become transparent but fragile. Transparency brings verifiability but may also lead to strategy leaks, relationship exposure, and identity association.

The value of ACTA lies in placing this issue on the table early.

ACTA is not yet a conclusion, but the questions it raises are worth discussing in advance: the future Agent economy should not be based solely on public identities and public reputations. It needs a layer of privacy-protecting proof mechanisms that allow Agents to demonstrate compliance with rules while preserving necessary identity, relationship, and strategy privacy.

When AI Agents begin to act on behalf of humans, privacy will not just be human privacy but will also become the security boundary of the Agent economy itself.

Disclaimer: This article represents only the author's personal views, not the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and identity to support@aicoin.com, and platform staff will investigate.
